Background
For more than 10 years, we have analysed all kinds of samples by flow injection analysis – HRMS. I will not discuss the pros and cons of including a column (LC-MS) or omitting it (FIA-MS). In brief, FIA-MS is great for profiling endogenous metabolites from cellular systems, less so for more exotic samples that require in-depth MS2. Believe it or not, we used FIA to profile >1.4 million samples, generating data used in hundreds of publications by us or our collaborators. In less than 1% of the cases, we backed up the MS1-only, no-LC FIA data with targeted or MS2 experiments.
We described the original method using a TOF-MS system in a paper published in 2011 (https://doi.org/10.1021/ac201267k). The relevant details are that the flow rate optimum is about 250 microL/min, chromatographic peaks are ca. 7 seconds wide, and the MS is operated in MS1 full-scan mode at about 1.5-2 Hz to collect sufficient points over the elution peak (this is relevant for quantification, adduct detection, etc.). Ten years later, the field of high-res MS instruments has evolved massively. We witnessed 2-3 new generations of TOF instruments from most vendors. Most importantly, Orbitrap instruments improved massively in speed, to the point that at the necessary scan speed of 1.5 Hz and above, their resolution is better than that of virtually any TOF system over the whole mass range from 50 to 1000 m/z.
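For intuition, a minimal back-of-the-envelope sketch (Python; the numbers are the ones quoted above) of how scan rate translates into points per FIA elution peak:

```python
# How many MS1 spectra land on a ~7 s wide FIA elution peak?
peak_width_s = 7.0  # typical FIA peak width, per the 2011 method
for scan_rate_hz in (1.0, 1.5, 2.0):
    points = peak_width_s * scan_rate_hz
    print(f"{scan_rate_hz:.1f} Hz -> ~{points:.0f} points per peak")
# 1.5-2 Hz yields ~10-14 points per peak, roughly the minimum for
# reliable quantification and isotope/adduct grouping.
```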
Given that in FIA the only mechanism to discriminate and separate metabolites is by m/z, one would expect that (higher) resolution is the critical factor to maximize coverage. Thus, intuitively, Orbitraps should be better than TOFs.
We are in the lucky position to own both TOF and Orbitrap instruments. Since the original publication, we tested and compared all types of instruments with real-life samples. Ten years later, we still come to the opposite conclusion: TOF-MS is better suited for flow injection analysis of metabolite extracts. Let’s go through real data to explain this unintuitive result.
A representative test
We used a very representative sample for metabolomics: a polar bacterial extract obtained from Escherichia coli happily growing on glucose at an OD600 of about 1 and extracted with ethanol. We injected about 1:100 of the extract (1 microL) without concentrating the sample.
We used a Q Exactive HF-X at a resolution of 240’000 (AGC 3e6) and an Agilent 6546 QTOF. Both were operated in full-scan mode (75-1000) and at a similar MS1 acquisition rate. Data were acquired in profile mode, converted to mzML, and analyzed with in-house software with optimised settings for the two instruments (you need to trust me on this). At first glance, the spectra look comparable, except for the very different intensity ranges, which are expected because the two instruments “count” differently.
At the relevant scanning rate, the HF-X eclipses the TOF’s resolution over the full mass range.
First, we measured the resolution on profile data. This was done by picking peaks with continuous wavelet transformation, retrieving the FWHM, and calculating the resolution (all in Matlab). The results meet expectations: the Orbitrap resolution is 5-6x higher in the low mass range and about 2x better around m/z 1000. The curves are likely to intersect at higher masses, but this is not of interest for small-molecule analysis.
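For readers who want to reproduce the idea, here is an illustrative sketch of the FWHM-based resolution estimate (Python; our actual analysis used CWT peak picking in Matlab, so treat this as a simplified stand-in):

```python
import numpy as np

def fwhm_resolution(mz, intensity):
    """Estimate resolving power R = m/z / FWHM for one isolated profile peak
    by linear interpolation of the half-height crossings on both flanks."""
    mz, intensity = np.asarray(mz), np.asarray(intensity)
    apex = intensity.argmax()
    half = intensity[apex] / 2.0
    left = np.interp(half, intensity[:apex + 1], mz[:apex + 1])
    right = np.interp(half, intensity[apex:][::-1], mz[apex:][::-1])
    return mz[apex] / (right - left)

# sanity check: synthetic Gaussian at m/z 147 with FWHM 1 mTh -> R ~ 147,000
x = np.linspace(146.995, 147.005, 201)
y = np.exp(-0.5 * ((x - 147.0) / (0.001 / 2.3548)) ** 2)
print(f"R = {fwhm_resolution(x, y):,.0f}")
```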
… and yet, the TOF detects many more compounds!
So far, so good. Does higher resolution translate into more detected peaks and more detected metabolites? Surprisingly no, it doesn’t. On the Orbitrap, we detect fewer peaks (3566 instead of 5109 on the TOF). If we adopt a simple annotation scheme matching m/z against an E. coli metabolome database (tolerance 0.5 mDa), with the Orbitrap we obtain 122 putative matches (level 4). On the TOF, the number of metabolites is 3x higher (362). In central carbon metabolism, which covers the most relevant and abundant metabolites, the difference in coverage is striking.
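To make the annotation scheme concrete, here is a minimal sketch of the m/z matching step (Python; the metabolite names and m/z values are illustrative, not our in-house pipeline or database):

```python
import numpy as np

def annotate(peaks_mz, db, tol_da=0.0005):
    """Level-4 putative annotation: match measured m/z against expected
    ion m/z within +/- tol (0.5 mDa here)."""
    peaks = np.sort(np.asarray(peaks_mz))
    hits = {}
    for name, mz in db.items():
        i, j = np.searchsorted(peaks, [mz - tol_da, mz + tol_da])
        if j > i:
            hits[name] = peaks[i:j].tolist()
    return hits

# toy example with two [M-H]- ions
db = {"glutamine": 145.0619, "glutamate": 146.0459}
print(annotate([145.0621, 146.9, 200.0], db))  # -> matches glutamine only
```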
How come the Orbitrap has 3x higher resolution, yet the TOF detects 3x more metabolites? The reason is dynamic range, in particular at the low end. I will first go through the explanation before showing how metabolite detection is ultimately affected.
The role of intrascan dynamic range
TOFs have superior intrascan dynamic range. The ones we use adopt ADCs that provide 5+ orders of magnitude of dynamic range. It is not totally linear, but this is not relevant here. The key aspect is that it allows quantifying peaks over 5 decades or so. The intrascan dynamic range of Orbitraps is limited to 3-4 decades. The AGC does a great job of expanding the dynamic range between scans and across an entire LC-MS run, but the intrascan dynamic range remains below 4 decades. In addition, there is a limit “imposed” by the Fourier transform: the ratio between the largest and the smallest centroids in a spectrum is specified to be 5000, and we observed 7000-8000.
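A quick diagnostic one can run on any single scan (Python sketch; it assumes you already have the centroid intensities of one spectrum in an array):

```python
import numpy as np

def intrascan_dynamic_range(centroid_intensities):
    """Ratio between the largest and smallest centroid in ONE spectrum --
    the quantity spec'd at ~5000 on Orbitraps. Returns ratio and decades."""
    x = np.asarray(centroid_intensities, dtype=float)
    x = x[x > 0]
    ratio = x.max() / x.min()
    return ratio, np.log10(ratio)
```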
In addition, one should keep in mind that the Orbitrap is a trap and needs to confine ions in a finite space to measure their m/z. As the space is limited, capacity is limited. To my knowledge, the maximum number of charges is 5E6; at least, that is the highest value the AGC allows one to select. Working with that many ions is far from ideal because many problems become visible: drops in mass accuracy, peak coalescence, and so on (good examples are https://doi.org/10.1021/acs.analchem.7b05372, https://doi.org/10.1021/ac500140s, and many more). The vendor recommends operating with lower values (around 5E5 or less).
The issue is that even the maximum of 5E6 is meagre. As soon as 2-3 very abundant ions coelute, or maybe 20 moderately abundant ones, they take most of the free seats in the trap before the AGC closes the gate. Less abundant ions are outcompeted. Their presence in the trap becomes highly stochastic. In the most extreme case, they become invisible. It is a form of ion suppression in the trap, which is exacerbated with complex samples. In practice, limited trapping can further restrict the intrascan dynamic range to less than the specified 1:5000. A simple way to experiment with this is to play with the AGC target.
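To make the stochastic trapping tangible, here is a toy simulation (Python; all abundances are invented for illustration): a fixed AGC budget is distributed multinomially over the incoming ion beam, and low-abundance species randomly drop out of individual scans.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical beam: 2 dominant, 20 moderate, 200 rare ion species
flux = np.concatenate([np.full(2, 1e8), np.full(20, 1e6), np.full(200, 1e2)])
p = flux / flux.sum()                     # relative abundances in the beam

agc_target = 5_000_000                    # charges admitted before the gate closes
trapped = rng.multinomial(agc_target, p)  # one AGC-gated fill of the trap

rare = trapped[22:]
print(f"expected ions per rare species: {agc_target * p[-1]:.1f}")
print(f"rare species absent from this scan: {(rare == 0).sum()} / 200")
# with only ~2 ions expected per rare species, roughly 10% vanish entirely
# from any given scan, and the rest fluctuate with a huge shot-noise CV
```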
The problem stems from the “natural” abundance distribution of metabolites in biological samples
In the case of flow injection analysis of, e.g., cellular extracts, the dynamic range has dramatic consequences. On the right, we compare the profile data directly around m/z 147. We show the profiles of triplicate measurements. Note that the scale of the y-axis is logarithmic. The striking difference is the baseline quality of the TOF: even the smallest peaks are detected very reproducibly.
On the Orbitrap, however, peaks are thinner but only exist for the most abundant features. Thus, the vast majority of peaks visible in the TOF spectrum do not appear in the Orbitrap spectra. This is because of the reduced dynamic range, which sacrifices low-abundance ions. The limited dynamic range has a drastic effect on metabolite detection because – in a spectrum – metabolites tend to populate the lower quantiles. In summary, the limited trapping capacity (and intrascan dynamic range) of Orbitraps prevents detecting the majority of metabolites in cellular/natural extracts. With these premises, there is no benefit from higher resolving power.
This problem always occurs when complex spectra with a wide dynamic range have to be fully characterized. The problem also exists in LC-MS (e.g. with lipids) but is particularly acute with flow injection analysis.
Are there workarounds? In theory, anything that reduces the dynamic range of a sample would help. However, everything we could think of compromises speed and throughput.
One example is to slice the mass range into smaller chunks, as done in BoxCar. We tried 3, 4, and 5 slices with different strategies. The benefits in terms of detection were marginal and far from what we obtain on TOFs.
Our take is that the differences in dynamic range are simply too extreme to be compensated (a linear fix can’t solve a logarithmic problem). Notably, every additional slice requires an independent scan event in the Orbitrap. Hence, to keep the cycle rate at 1-2 Hz, the resolution must be reduced to 120k or 60k, thereby losing all putative benefits of the Orbitrap.
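The cycle-time budget behind this statement can be sketched as follows (Python; the per-resolution transient lengths are approximate HF-X values at m/z 200 and should be treated as assumptions, and scan overheads are ignored):

```python
# maximum resolution that still fits N slices into one 1 Hz cycle
transient_ms = {240_000: 512, 120_000: 256, 60_000: 128, 30_000: 64}
target_cycle_ms = 1000  # 1 Hz, the lower bound for ~7 s FIA peaks

for n_slices in (1, 3, 5):
    for res, t in sorted(transient_ms.items(), reverse=True):
        if n_slices * t <= target_cycle_ms:
            print(f"{n_slices} slice(s): best resolution ~{res // 1000}k")
            break
# 1 slice -> 240k, 3 slices -> 120k, 5 slices -> 60k
```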
Consequences for peak integration
Stochastic trapping of rare ions has one more underappreciated consequence: noisy XICs. The lower the intensity of an ion, the higher the risk that it will not be detected. This results in either missing values or underestimated counts (depending on how the data are processed). The consequence is that the coefficient of variation (or relative standard deviation, RSD) of low-abundance ions increases dramatically. On TOFs, the increase in RSD is marginal. Exemplary XICs are shown on the left of the figure.
Workarounds exist. One is to smooth the data, but this only works if sufficient points have been collected during peak elution. An alternative approach is to assume a given chromatographic peak shape and fit it to the measured points (this is what Compound Discoverer seems to do, with some funny effects). Unfortunately, both approaches only mitigate the problem and fail in severe cases such as that of glutamine (GLN) shown below.
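For illustration, a minimal version of the peak-shape-fitting idea (Python with scipy; the Gaussian model and the synthetic XIC are assumptions for demonstration, not what Compound Discoverer actually implements):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, t0, s):
    return a * np.exp(-0.5 * ((t - t0) / s) ** 2)

# synthetic XIC with one dropout (zero) caused by stochastic trapping
t = np.arange(10.0)                    # scan times, s
y = gauss(t, 100.0, 4.5, 1.5)
y[3] = 0.0                             # ion missed in one scan

p0 = [y.max(), t[np.argmax(y)], 1.0]   # crude initial guess
popt, _ = curve_fit(gauss, t, y, p0=p0)
area = popt[0] * popt[2] * np.sqrt(2 * np.pi)  # analytic area of the fit
print(f"fitted apex {popt[0]:.1f}, area {area:.1f}")
# with a single dropout the fit recovers the peak reasonably well;
# with many dropouts (the GLN case) there is too little signal to fit
```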
Conclusions
We still haven’t found a workaround to operate FIA-like analyses on Orbitraps at full resolution without suffering from their poor intrascan dynamic range. We are all ears for novel ideas. For the time being, we stick to TOFs, which deliver many more metabolites more robustly.
Very insightful, and it beautifully illustrates one of the limitations of traps in general!! A relatively recent workaround utilized sample-specific ion distributions: https://www.nature.com/articles/s41467-020-17026-6. In that approach, the authors tailored the scan ranges of the spectral-stitching-like method to minimize ion competition in the C-trap. They used 3 microscans, which could help with the CV% of low-abundance ions. And at a 75 microL/min flow rate, there was enough time for 8 ranges in both modes. However, as the approach is sample-dependent, it might not be applicable to all types of samples and will depend on the background ions, which can vary a lot. More innovations are needed, perhaps smarter on-the-fly AGC optimization or automatic scan-range methods based on a “prescan”. In the end, a trap is a trap: it has some wonderful applications but also clear limitations.
To be honest, the only reason the method you cited worked is that they throttled the resolution down to 70’000 to preserve an overall fast scanning rate. It’s a simple hack, but above m/z 500 the resolution will be worse than on an average TOF.
I am playing the devil’s advocate here, but why would you do the whole optimization and stitching if the best you can get is worse than a TOF?
Good point!! A relevant question is how much resolution is good enough for any specific application. Is 50-70k enough? Or do we need 120k, 240k, 500k, 1M, etc. at all? Theoretically speaking, there are many overlapping isobars that differ across applications, fields, and sample types, but practically speaking, how much does one really benefit from that high resolution in effectively resolving them? What is the metric for this practical threshold? Perhaps it is time for a new metric of “effective resolution” or something like this :).
> Is 50-70k enough? Or do we need 120k, 240k, 500k, 1M, etc. at all?
This question can be answered in the most common way – it depends 🙂 If you are profiling a common set of endogenous metabolites, 60k is fine. Try to resolve isotopic overlaps in lipidomics or to dig neuropeptides out of the matrix – you are done with TOFs, as you need 120k over the 500-2000 mass range, which roughly translates into 240k@m/z 200. Not to mention applications requiring fine isotopic structure information.
Analytical method and instrumentation should be fit for purpose, that’s it. Brilliantly demonstrated above by Nicola!
Thanks, Sergey, for the comment. Sure, it depends! But on what precisely is the question :).. I have been working with mass spectrometry of lipids and complex samples for more than a decade and saw some clear benefits of higher resolution, but only sometimes, and mainly with isotope tracing experiments (e.g., resolving the M+2 of the sulfur isotope from the M+2 of two 13C atoms). Several lipidomics methods operate at 70k as well and do not miss the classical isobaric lipids. However, in general, I get the feeling that resolving power as an absolute metric is a bit deceiving, as it addresses an ideal case that might never exist in practice, i.e., more is not always better. The question is how one can be sure in advance that a given resolving power is enough for a given sample type (matrix) without relying on rules of thumb, gut feeling, or even personal experience. What are the objective criteria? Can one quantify the benefit of high resolution? And how? Also, would high resolution still add value for getting good MS/MS in those cases, or for getting better separation? Perhaps the topic needs its own post 🙂
I completely agree. On the other hand, I am working with FIA-FT-ICR and yet, as Nicola also said, I struggle to detect metabolites in the very low mass range; in this case, ultra-high resolution cannot help us.
Thanks for this great summary!
I was wondering whether you have looked into the application of microscans for flow injection analysis, as it has been reported to greatly improve the dynamic range (on a logarithmic scale; https://doi.org/10.1002/rcm.8818)? I guess that, while this would lead to a lower number of observed scans, fewer scans might be required, as the averaging simply happens before the FT process.
Hi Yasin, good point. Microscans (aka transients on FT-ICRs) are summed/averaged before the FT and improve S/N, noise removal, dynamic range, etc. Still, microscans cost time. The paper you cited mentions that “acquisitions varied from a few seconds to around 30 min”. That is a temporal range suited for direct infusion, but not for FIA and the throughput we need.
Anyway: yes, we also tested FIA with summation of microscans… but only up to a few, to keep fast cycle times. It didn’t help much, but I’ll try to retrieve the data and add it to the post (when I have time).
> Alaa Othman
Thermo answered this question at least once during a Compound Discoverer webinar. As an order of magnitude, we miss 30% of features at 60k compared to 120k, and only another 3% is gained when climbing up to 240k (@m/z 200), from what I have in mind right now. For lipid work, classical things like PC vs. PC-O can already become tight at 70k@m/z 200, hence resolving power is always good to have – the more the better, ceteris paribus. Also for unforeseen matrix influences.
> Nicola Zamboni
You are using 250 uL/min for FIA, which is in principle a classical analytical flow. Have you evaluated miniaturization? In my experience, orbi data become much more consistent from a metabolomics point of view in the microflow regime (with chromatography). The standard HESI-II probe is also quite good at 5-15 uL/min in terms of spray stability and convenience of handling. I totally believe you that the orbi will still be heavily impacted by the sheer amount of injected material, but this step can still improve the situation a bit, at least for the lower mass range.
> Yasin El Abiead
Microscans will not significantly improve dynamic range on a short time scale now, as it is in most cases intrinsically capped by the proprietary eFT spectrum calculation. The authors of the above-mentioned article worked on an older LTQ XL which, IIRC, featured magnitude FT to get the spectra, and there was no limitation on the number of acquired microscans.
Hi Sergey Girel, I see your point, and the capping of the maximum allowed number of microscans at 10 is indeed very unfortunate, IMO. However, I wonder if the FTMS Booster from Spectroswiss improves this situation, as it enables aFT… has anybody tried to play with that for improved dynamic range?
> anybody tried to play with that for improved dynamic range?
https://doi.org/10.1021/jasms.9b00032
Me too. With some reasonable constraints it is a very powerful tool. But it will not dramatically change the discussed situation, as the main cause of pain is the limited trap capacity, with ion competition on top of that. Although it is possible, with some effort, to do reasonable metabolomics with the current generation of orbis, the setting of NZ in particular, and untargeted metabolomics in general, still require a high-end TOF.
Correct me if I am wrong, but it seems to me that the dynamic range improvement is quite impressive, as they reach at least 5 oom according to https://doi.org/10.1021/jasms.1c00051, which is about the same as reported for the TOF in the blog above, but with a resolution of 900k@m/z 200.
Regarding being able to do ‘reasonable metabolomics’, I would argue that orbis also have some advantages with respect to the accurate measurement of intensity ratios (aka better linear dynamic range), as saturation effects are (in all data I have looked at to this day) still a huge issue for TOFs. In fact, in my experience, the linear dynamic range is in many cases higher for orbis than for TOFs when this is taken into account. However, I see how the non-linear dynamic range seems to be higher for TOFs. I would be curious if you have different experiences there.
> Yasin
First, let’s not mix the sweet and the soft. A feature such as dynamic range (as seen in the obtained data) always has two aspects: there is a sample and a machine. The sample contains analyte X with concentration [X] in matrix (X) and analyte Y, [Y], (Y), where [X]>>>[Y]. The machine, on the other side, provides detection of X and Y together within a certain range, ideally covering both [X] and [Y]. Amino acids in human plasma are slightly different from uranium isotopes in neat solution; I guess you see my point here.
Second, as has already been pinpointed above, the main problem of today’s Orbitraps is limited trap capacity. The ion current generated by a HESI-II or NG source can easily reach around 1e9 ions/scan, whereas your sampling capability with an orbi is at most 5e6 (or 1e7 on old machines, where this was available). So here we can already assume a loss of two orders of magnitude due to ion competition, statistics, etc.
Third, there are processing and sampling penalties. On one side, eFT caps the dynamic range for several reasons, mainly data volume and to facilitate on-the-fly processing. On the other side, there is an instrumental detection limit, which comes down to detecting a certain frequency in the resulting mix of image currents. So, to be able to recognize a very weak signal in the forest, you need to collect enough samples for a reliable S/N ratio.
From this point of view, the uranium situation can look like the detection of one 235-U ion among 1M 238-U ions inside a 3e6 TIC. I am making the situation up a bit, of course, and sacrificing scientific correctness to be simple and illustrative. It’s easily 6 orders of magnitude of dynamic range, and the problem is only to open up the possibility of extended sampling and to improve detection a bit to get this one ion out of the noise. Which is perfectly done by the Booster with aFT + a long transient. But let’s check what is going on with human plasma: you have, let’s say, GLU vs. GLN at 1000:1, plus matrix/solvent/etc. ions on top of it, with a total ion current of, say, 300e6. So you sample 3e6 from 300e6, and what you see next – we are at 10:0.01. GLN is lost. It is simply either not in the trap or below the cap if we have something like PHE at 1e6, which gives us a 1000:1:0.001 ratio after the ion injection event, assuming uniform sampling. Look in the article, it’s perfectly and quantitatively described there: “two distinct differences between two AGC settings …..”. So if we are lucky and the ions are in the trap, we will improve. If the ions are not there – they are not there, deal with it.
You are absolutely right about the better linearity of orbis and the absence of saturation effects. But how is that related to the outcome of a metabolomics analysis? In metabolomics we need a comprehensive metabolite map, which is lost. Or the ratios of concentrations are wrong. That’s why I am stressing the word “reasonable” quite loudly when it comes to metabolomics on orbis. It is not a trivial thing and requires a lot of effort.
Btw, you can correct saturation on TOFs at the price of increased experimental error. TOF machines have other intrinsic problems, but for this particular application they are definitely the better choice today.
> Sergey
You clearly understand the data processing happening under the hood of the Orbitrap in great detail, so thank you for the illustrative explanations!
Regarding the calculation, which you based on the number of ions in the trap (1e9 ions/s in the spray vs. 1e7 or 5e6 in the trap): I have been told by people from Thermo that the value of the AGC target in fact only correlates with the number of charges in the trap and is not the actual number of charges, and that the change from 1e7 to 5e6 is therefore not based on a ‘real thing’. I would be very interested if you have different information there.
I do see that ions that are not present in the trap cannot be measured, and that in the case of complex samples the >5 oom won’t be reached using the approaches discussed. I would still be interested in how much improvement there would be. However, as said, I appreciate your explanations, and I now expect less from those methods in this regard than before.
On the topic of why the linear dynamic range is important for metabolomics, I can think of a number of things. First, the linear dynamic range can be very important for increasing the reliability of annotations via the utilization of isotopologue patterns. I know that for Nicola’s FIA method (which I actually think is very hard to discuss/comment on without also considering the downstream data analysis) you have to go all-in on the concept that an annotation (and really everything you see in metabolomics) is merely a hypothesis. However, if you need to place more trust in analytical annotations, much of this TOF vs. orbi for FIA discussion shifts, IMO. While I do not have much TOF experience, I imagine many of those very low-abundance ions will be hard to identify (especially in FIA). So, to play devil’s advocate: what is it worth seeing them if you don’t know what they are? The answer is of course hypothesis generation, but you need a very well-thought-through method (involving extensive knowledge of your biological system) to make sense of these kinds of signals. I would therefore tie the conclusion that TOFs are better for FIA strongly to the exact way the data can be analyzed, which might change from biological system to biological system.
To be clear, I think Zamboni’s FIA-TOF method is great! And it indeed seems to me that a TOF is the better choice for his exact workflow. However, from the points brought up here, I don’t see how TOFs are in general a better fit for FIA of metabolic samples from any biological system. But I am very open to corrections 🙂
> I have been told by people from Thermo that the value of the AGC target is in fact only correlating with the number of charges in the trap
First, yes, this number is about charges, not ions. But again, I am simplifying – in the small-molecule world, have you often seen a typical metabolite with a +3 charge state? 🙂 Then, about the “real thing”, I’ll fire back with a question: where do we have more charges, inside or outside of the trap? Hint – you can look at TIC values on different instruments/ion sources, or even use a charge meter, and get an impression.
> how much improvement there would be
There will be a good improvement under the right instrumental conditions. I cannot say more without shooting myself in the knee, so I deeply apologize for such an answer in advance. I’d not challenge a TOF in detrimental modes like FIA or ballistic gradients, but otherwise it’d be fine.
> First, the linear dynamic range can be very important to increase the reliability of annotations, via the utilization of isotopologue patterns.
If an isotope peak is simply missing because of competition problems, how reliable can the isotopic pattern be? Btw, isotopic patterns start carrying more weight under fine-structure analysis conditions. In MS for mortals, you properly annotate in a completely different fashion 🙂 And yes, Orbitraps are way better in this sense, but that is a completely different story.
> you have to go all-in on the concept that an annotation (and really everything you see in metabolomics) is merely a hypothesis
In the real world, it all ends with a biological confirmatory experiment. So it does not really matter how you got the data; the main point is that you got it and that your annotation process is consistent enough to produce a meaningful pathway analysis. This is, again, a different story.
> strongly to the exact way the data can be analyzed
Data analysis here is a well-known and recognized metabolomics pipeline. Nothing special favoring TOFs.
> I don’t see how TOFs are in general a better fit for FIA of metabolic samples
To summarize – count on a loss of at least 40% coverage with orbis, if you do it the usual way.
You are right in that most of the other things are a different story (which is, however, very much intertwined with this one). So, for the sake of not discussing here until the end of time, let’s better not go there.
However, thanks for your perspective, enjoyed the discussion!
Yasin
First, I would propose to keep this exchange an honest scientific discussion, detached from any possible vested interests that are surfacing in the spill-over of this post on some social media.
Currently, Orbitrap mass spectrometry is implemented in general-use instruments intended mainly for sensitive LC/MS and LC/MS/MS of complex mixtures. This is where it takes advantage of higher resolution (as illustrated by Nicola) and sensitivity (>2 orders higher than in oaTOFs, as seen from the first figure – even if we take into account that the actual charges/sec is several times lower than the NL intensity in the QE-HFX MS), as well as better peak shape and lower chemical noise.
However, when it comes to special applications like FIA, direct use of standard techniques is not always the best strategy. Indeed, it is the trapping principle of the C-trap that makes the brute-force approach undesirable (i.e. “take as many seats as you want”, as Nicola alluded to).
A several-fold improvement of the “brute-force” dynamic range could indeed be achieved using many microscans instead of standard reduced-profile spectra (the former utilize summation of transients prior to processing, while the latter cut off everything below a certain threshold, so later integration over the FIA profile does not help). Unlike what some comments stated, there is no problem with the utilization of microscans with Enhanced FT on the latest instruments. It could be mentioned that full profile data (as provided e.g. by Spectroswiss) also allow integration over the duration of the FIA elution profile. In either case, a gain in dynamic range scaling with the square root of the number of spectra is expected.
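As a quick numerical illustration of that square-root scaling (a sketch under the assumption of uncorrelated, noise-limited acquisitions):

```python
import math

# sqrt-N gain from summing/averaging N transients or spectra
for n in (4, 9, 100):
    print(f"{n:>3} spectra -> ~{math.sqrt(n):.0f}x S/N (dynamic range) gain")
```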
However, the real solution to the “logarithmic problem” could only be intelligent filling of the C-trap, i.e. the introduction of “a social policy to benefit the under-privileged”. It could be organized in the following way: a standard full-MS spectrum is acquired (maybe even at a lower resolution, as there is little chance of intense peaks being isobaric), and then data-dependent mxSIM (multiplexed SIM) scans are taken. Unlike standard mxSIM scans, these scans should cover the gaps between, say, the top-20 or top-50 peaks. Given high resolution/long transient duration and multiplexing of e.g. 10 gaps per spectrum, each gap will get order(s) of magnitude longer fill time compared to the standard “brute-force” approach described in the blog. Although FIA elution is pretty fast, we could still fit several full cycles over its duration and hopefully break new ground in depth of analysis!
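For illustration, a minimal sketch of how such gap windows could be derived from a survey scan (Python; the guard margin and the window pairing are illustrative choices, not a specification of the actual method):

```python
import numpy as np

def gap_windows(top_mz, mz_lo=75.0, mz_hi=1000.0, guard=0.5):
    """Given the top-n m/z from a survey scan, return SIM windows that
    cover the space BETWEEN them, so the trap is filled preferentially
    with under-represented ions. guard (Th) excludes each abundant peak."""
    edges = [mz_lo]
    for mz in sorted(top_mz):
        edges += [mz - guard, mz + guard]
    edges.append(mz_hi)
    # pair edges into (start, stop) windows, dropping degenerate ones
    return [(a, b) for a, b in zip(edges[::2], edges[1::2]) if b - a > 1.0]

# toy survey scan with four dominant peaks; the resulting windows could
# then be grouped into a few mxSIM scan events
print(gap_windows([118.09, 146.06, 175.12, 259.02]))
```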
Although this particular protocol is not in the standard toolbox of the older Q Exactive family, it could be programmed using the Application Programming Interface, available for free to all volunteers: https://assets.fishersci.com/TFS-Assets/CMD/posters/PN-64428-LC-MS-API-Controlled-In-Depth-Proteome-ASMS2015-PN64428-EN.pdf. We are open to collaboration on this matter!
Thanks a lot, Alexander, for enriching the scientific discussion!!! I really appreciate it. Modified dd-mxSIM, as you describe it, is indeed an elegant solution. It is very similar to what I had in mind in my first comment about “automatic scan ranges based on a pre-scan”, so I love the idea :). However, this comes at the cost of cycle time, as Nicola mentioned: e.g., at the 240k resolution of the Q Exactive HF-X (similar to FIA scan rates on TOFs), we can easily reach cycle times of a few seconds with 3-5 dd-mxSIM spectra. This translates into fewer points per m/z over the FIA elution peak, which might affect downstream processing. Having said that, in general, I think the idea of even more rational C-trap filling is very intriguing and perhaps has applications beyond the specific FIA case.
I am learning a lot. Many thanks! My take: (i) we are not doing anything wrong, (ii) for the extreme case of FIA with complex samples [it’s a niche, don’t extrapolate!] we hit a nerve on the C-trap, (iii) intelligent filling is the key.
**
In this regard, I found our old data where we compared multi-segment measurements. [We also tried multiple microscans, but since the main limitation is the C-trap, there was no benefit in summing transients before the FT]. We went from 1 to 5 m/z segments, trying our best to slice the range so as to reduce the aforementioned problems. In practice, we used m/z windows of different widths (manual tuning). Increasing the number of scans has the expected effect on the number of features. With 5, we got similar coverage of the expected compounds. Obviously, this implies a 5x lower scan rate and more interpolation. Newer generations (480 and Tribrids) will anyway do better than our HF-X. Not great, but a starting point.
**
As Alexander perfectly explained, the key is to build on the strengths of the Orbitrap rather than emulating TOFs. Different architecture, different solution. That means leveraging its fast DDA engine (to reduce competition in the C-trap across the dynamic range, the “social problem” mentioned before) and adopting a more fluid use of resolutions. This is even simpler with FIA because the relative distribution of peaks is largely preserved over the 5-7 seconds of the elution peak. Hence, a couple of very low-res scans (15k will do) at the onset of the peak will be enough to identify the top-n features over the mass range, and that information can be used to define optimal segments for the rest of the MS acquisition. A second strength is to drop the resolution for some of the low-m/z segments [no, nobody needs >60k below m/z 200 for FIA applications]. Massive time could be saved.
**
Where are the challenges? Downstream stitching of scans with different scan speeds is not a big deal. The challenge is implementing the DDA. I assume that the IAPI would allow reading the spectra and defining the segments and resolutions for subsequent scans on the fly. But then comes the optimization problem: finding the right number of tSIM (mxSIM) scans and the right resolution. It’s a nice optimization problem that stimulates my mind. Simon Rogers & co. had some inspiring work on a Fusion (Davies et al., https://doi.org/10.1021/acs.analchem.0c03895), but here we don’t have the time to sample and simulate on the fly: we need an instantaneous approximation to instruct the MS within milliseconds. Interesting problem, much beyond FIA.
**
The first idea I had: there is no time to sample and simulate on the fly, but we could do the full optimization on the 1+ Mio FIA runs we have in our hands. Build a slow optimization in advance. With the results, we would then train, once, a fast ML predictor that – given the list of top-n peaks as typically used for DDA – spits out optimized segment widths and resolution settings. Bam! Problem solved :-).
**
Apart from FIA, there is much more one could do in the context of fast gradients and better use of MS2 time. MealTime-MS is a nice example (https://pubs.acs.org/doi/pdf/10.1021/jasms.0c00064). The Orbitrap IQ-X is a great step in that direction, but there’s much more one could do and think of.
**
I’ll play with the idea and the data. I am not sure whether the IAPI for the HF-X allows for all of this. The SLA tends to be very restrictive so as not to interfere with instrument sales.
**
Many thanks again!
Excellent idea, Nicola!! And having been in your lab for the last two years, I can tell for sure that more brilliant ideas will emerge once people get their hands dirty with the lab’s huge dataset, ML, and the IAPI :). That will also clearly push the applicability beyond the FIA case towards the general case, as any individual LC peak is de facto a FIA peak for all co-eluting metabolites/lipids/proteins… A perfect project for collaboration!!
Indeed, tremendous inputs! It’s a sheer pleasure to have such a dialogue.
Would it make sense to add an additional constraint, the expected response, for prioritization – if we are talking about biofluids? The expected concentrations and contents of those are quite well known. Response factors are also available or can be estimated (e.g., Liigand et al.).
Great, thanks for your contribution, Alexander!
There is no problem with the utilization of microscans, indeed. The “problem” is that their number is capped, at least on the QEx and QEx Focus (n=10). IIRC, in the mentioned article they did 1000 u-scans. I perfectly understand the underlying reasons; however, the point was that such an experiment is not always possible in a standard setting due to technical limitations.
The idea looks great and also resembles quite a lot my personal experience with methods where some particular low-abundance analyte had to be scratched out of the middle of a chromatogram by introducing separate SIM scans. Again, analysis of peak distributions, similar to https://doi.org/10.1038/s41467-020-17026-6, can also be done on the fly to get the most out of such an approach.
I’d only stress, the tiniest bit, the question of whether there would be issues stitching the resulting scans together to get correct signal ratios for the full picture.
Thank you very much, Nicola and Alaa, for your constructive proposals and excellent references!
Indeed, it makes perfect sense to examine the collection of past FIA scans to understand which thresholds make the most sense for data-dependent decisions – and where the sweet spots for features are (e.g., it might make sense to further narrow down SIM windows in such regions).
Theoretically, the gain in dynamic range will approach the ratio of the TIC in the full-MS scan to the TIC in the SIM window – but in reality, the presence of other windows in the mxSIM scan, the limited duration of the transient, etc. will reduce this gain. In practice, the shorter the ion time (IT) for full MS (this is where the HF-X gives an advantage compared to earlier Q Exactive MS), the more freedom we have in selecting the IT for each SIM window.
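A back-of-the-envelope version of that upper bound (Python; the TIC numbers are invented for illustration):

```python
import math

# upper bound on the fill-time / dynamic-range gain per SIM window:
# roughly TIC(full MS) / TIC(inside the window), before mxSIM overheads
tic_full = 3e8     # charges/s reaching the trap over the full mass range
tic_window = 2e6   # charges/s falling inside one quiet SIM window
gain = tic_full / tic_window
print(f"theoretical gain: ~{gain:.0f}x (~{math.log10(gain):.1f} decades)")
```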
Another aspect relates to Alaa’s concern about the number of points across the FIA elution profile. There is no question that it would make sense to acquire at least 7-10 full-MS spectra across the FIA peak (which becomes easier when done at lower resolution). However, as all m/z are supposed to follow the same elution profile, will there be any gain in having more than 1-2 acquisitions per SIM window (with subsequent scaling of intensities according to their position on the profile)? If not, then this increases the number of available SIM windows several-fold, with more freedom for their selection…
Concerning API flexibility, its development followed the principle: “whatever a user could do in the Tune program could also be programmed via the API”. So hopefully this gives enough flexibility – however, in the unlikely case that anything is missing, we could discuss a special license…
Thank you, Alexander. Great advice on the mxSIM. I see it coming together, but there are more complications. The devil is always in the details. For instance, the assumption that the spectrum is constant over the FI elution profile doesn’t hold. The matrix and the non-homogeneous solvent composition of the injection plug create a dynamic in the ionization. This is particularly acute at the sharp front of the injection. One would still have to cycle more frequently between the quick survey scan and the optimization of segmentation and IT gating for multiplexing. If we extrapolate too much, expectations will diverge from reality, with a negative effect on the gain.
*
The speed of making smart decisions remains the crux. As usual, my first idea about the smart DDA had several flaws. I didn’t think it through. As the trapping is predictable, the optimization problem could be staged efficiently and would require only a small, ad-hoc training set. Getting a fast engine for real-time operation is the intriguing part. I am sure there are temporal overheads (delays) here and there. It will require some testing to see how many milliseconds are indeed available to decide what to do.
*
What bugs me the most is that FIA is not the right scenario for such developments. We are hypothesizing about implementing a novel type of intelligence for DDA and testing it on an instrument that remains too slow (e.g., compared to a 480 and all of the Tribrids). Even if we succeed, the gain in terms of research capacity would be zero (because our TOFs already do great). Maybe we could produce a paper as a mere demonstration, but I prefer to invest resources in other challenges that offer clear benefits. These would be emerging applications for which the HF-X is already best in class. Two such hot topics in our lab are (i) rapid LC-MSMS gradients (2-3 min in total, with peaks of 1-2 sec and no time for DIA), and (ii) the rapid collection of fine isotopic structure for quantitative 13C metabolic flux analysis. Improving intelligence in such cases would have an immediate and substantial impact on research capacity for years to come.
*
This brings the focus away from FIA and towards LC-MS acquisition. I alluded to it in my previous post, but I’ll be more explicit. MS hardware has improved massively over the past decade. Software hasn’t. The key for the next decade is instrument intelligence, for all instruments, from all vendors. I find it a fantastic problem. It’s about finding a balanced mix of brute-force (dumb), intelligent, pragmatic, and approximative approaches to maximize information retrieval. On-line searches are just a start. In metabolomics and lipidomics, there is much more that could indicate “information”, “spectral quality”, or “this is known” to drive DDA.
*
The issue with the IAPI is the SLA and its clause on Restricted Work. It incentivizes incremental work (doing something that takes little time and is not too cool) but penalizes disruptive ideas.
Any successful concept could be “terminated” because – down the road – it might overlap with novel features embedded in the newest generation of instruments. For a principal investigator and team leader like me, the conditions imposed by the SLA make it impossible to plan or synergize. It’s a killer for ambitious projects that are likely to require 1-2 years of hard work.
It’s a pity because I feel we are missing out on tackling grand challenges: enabling large scale studies, annotating ALL features in a sample, obtaining interoperable digital maps of human cohorts (think of genomics!), etc.
*
I apologize for drifting from the main topic of increasing dynamic range. Coming from the application world, I could not resist stressing the importance of context and vision. We’ll go back to studying the nitty-gritty details in the coming weeks.
Thanks a lot Alexander for bringing the discussion to the next intellectual and technical level.
As Nicola elaborated, the solution to the extreme FIA case could be translated into larger-context solutions for fast LC-MS/MS, where our interest resides and a larger benefit can be expected. Therefore, in reply to your question, the number of scans per elution peak matters even more.
In the LC case, the interscan dynamic range of full scans in most Orbitraps that I have worked with is excellent and comparable to a QQQ operated in MRM mode, with approx. 4-5 orders of magnitude and LOQs in the high attomoles injected on column. However, the intrascan dynamic range is rarely above 3 orders of magnitude (as published in your landmark paper in 2006 and demonstrated by Nicola here for the FIA case). Although one might think that LC experiments on Q-Orbitraps would not benefit much from an increase in intrascan dynamic range, the BoxCar paper by the Mann group in 2018 showed a substantial gain in “deeper” coverage at the MS1 level by expanding the intrascan range from 3 to 4 orders of magnitude.
The case we are aiming for is even more complex than the BoxCar proteomics case, for several reasons, without going into too much detail: 1) the heterogeneous physicochemical landscape of metabolomics and lipidomics samples, where there is usually little correlation between abundance and physicochemical properties; 2) in human plasma, which is the dream goal, we expect 10 orders of magnitude of metabolite/lipid concentrations; 3) most of the new interesting biology and biochemistry is expected/shown NOT to occur in the upper three orders of magnitude of that 10-decade range, so one order of magnitude of extra depth makes a big difference (not to mention solvent and background ions that can limit the effective upper range even more); 4) for LC applications, the cycle time is at best 200-1000 milliseconds. Therefore, the FIA case, despite being complex and extreme, is the toy example of the challenge in the fast LC case 🙂
In the ideal scenario, a single smart dd-mxSIM scan should cover all intensity gaps and act as an “intensity low-pass filter based on m/z” for ions in those ion-void regions between the highly abundant ions. To achieve that single mxSIM, a really intelligent MS1 acquisition is needed that is robust and reproducible across ion distributions, finding the optimal degree of multiplexing, the isolation windows, and several other details related to quadrupole performance, IT, etc. That is one part of intelligent MS1 acquisition whose surface is being scratched here, but coupling it to the ideas Nicola mentioned on more comprehensive intelligent acquisition at both the MS1 and MS2 levels will bring the discovery potential of Orbitraps to its full maximum and build on their inherent strengths.
Having said that, I totally agree with Nicola that intelligent instruments everywhere will be an essential part of our future, but I am still hoping that new breakthrough developments in mass spec hardware (e.g., scanning quadrupoles, combinations of Orbitraps and TOFs, etc.) will make mass spectrometry an even more essential tool in life sciences/healthcare across its whole spectrum of applications.
Thanks again for enriching the discussion. I am really enjoying it a lot and hopefully one day, we can achieve our goals.
Thanks, Alaa, for a good summary of the biological dimension of the issue!
Nicola says it’s 1-2 sec/peak (baseline or FWHM?) – how do you arrive at such a cycle time? I guess we still target at least 5-6 points/peak for a good deconvolution. Then we are at 400 ms max.
Here u-scans help greatly – the more of those, the better the statistics. The Exploris shines with semi-targeted profiling at 120k/2 u-scans or 60k/4 u-scans if the matrix is not complicated. “Normal” LC, of course -)
Excellent article and comments, well done! I guess your observations may be extrapolated to FT-ICR-MS and to direct infusion analysis, correct? This raises questions about the performance of direct infusion FT-ICR for metabolomics of complex biological extracts. What do you personally think? On another note, I am looking forward to your post on LC-MS-based metabolomics and why you might prefer Orbitrap-based over TOF-based technology in that particular context.
Thank you, Gaetan. Yes, all data I collected on FT-ICR-MS (from two MS vendors) point to the same issue. Since the number of charges is even lower than in the C-trap, the problem is more acute. I’ll add the data if I find the time. The LC-MS post is in progress, but it is likely to be ready only after my loooong vacation.
*
FYI, the next post should be about the development of MSNovelist (https://www.biorxiv.org/content/10.1101/2021.07.06.450875v1), to foster a discussion on de novo structural elucidation.
Thank you, Nicola, for the confirmation. Below is a link to an interesting article by Cole and co-workers which addressed this exact issue:
https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/full/10.1002/jms.4613
Since FT-ICR was mentioned, we can extend to other traps.
What about the timsTOF? I was once told that the limit is 3E7, but I don’t have data to evaluate the consequences with real samples.
What about the Zeno trap? No, no issues are expected. It traps ions only very briefly, in sync with the TOF pulses (10 kHz?). In addition – if I got it right – trapping is only activated for MS2.
TIMS has a capacity of 5e6 charges, at least from what I remember of the available settings.
Same saturation issues, but better cycle time should be possible due to parallelization of IMS separation and ion collection.
I think Laurent Bigler has a timsTOF (Pro?), correct? And his lab is near yours. A comparative assay of your Agilent QTOF versus the timsTOF (operated in diaPASEF mode, if feasible?) would be interesting. On our side, we have a Synapt XS, which is not supposed to suffer from space-charge effects but still adds the IMS dimension, albeit with supposedly lower resolution than the timsTOF. I am not aware of any lab that already holds a Zeno TOF…
Very interesting blog post. I am very curious about your thinking on the effect of what you describe for very low-abundance samples – say, 50 cells or a very dilute matrix. Do you think the limit of the trap will be less of an issue in such cases? Or do you think perhaps it will be the opposite, with only the most abundant ions detected? Also, why do you think this is not observed with LC in single-cell proteomics – for example, this paper: https://doi.org/10.1038/s41587-022-01389-w