Q: I am doing MRM with MS3 scans, and I would be interested to know whether Skyline could handle such data?
Ans: We have never implemented this. It was requested a long time ago for targeted phospho-proteomics on an LTQ, but not much since. If you can get us a Skyline document and some raw data, we may be able to figure out a way to implement this, but we would need that kind of example.
Q: How many fragment ions are necessary or preferred to confirm a peptide identification?
Ans: There is no absolute number of fragment ions required. The more the better, but other factors also help to increase the confidence of peptide identity, such as correlation of intensities between the endogenous signal and the library, the expected retention time, and/or co-elution with the heavy standard.
Q: How can we confidently design a PRM that assures we are achieving 8-10 points across the chromatographic peaks for peptide quantification?
Ans: First, you need to know the average peak width under your chromatographic conditions; then aim for a cycle time equal to that peak width divided by 8-10.
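The cycle-time arithmetic above can be sketched in a few lines; the 30 s peak width and 10 points per peak are hypothetical values for illustration:

```python
def max_cycle_time(peak_width_s: float, points_per_peak: int) -> float:
    """Longest cycle time (in seconds) that still gives the desired
    number of data points across a chromatographic peak."""
    return peak_width_s / points_per_peak

# Hypothetical example: 30 s wide peaks, aiming for 10 points per peak.
print(max_cycle_time(30.0, 10))  # → 3.0 (s maximum cycle time)
```

Any method whose total cycle time stays below this value will sample the peak densely enough for quantification.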
Q: There are situations where we'd like to monitor both the MS1 and MS2 peaks in PRM (i.e. we alternate full scan with PRM during acquisition). We have an issue where the MS1 peak shape is very choppy in the Skyline profile (the vendor software shows a smoother profile). How should we optimize MS1 settings in Skyline - or instrument acquisition - so that the peak shape is more comparable to the transition peaks?
Ans: Please submit this request to the Skyline support board. Once we have your data set we can look at what is going on more closely and figure out a workaround or a fix.
Q: Do you notice a big difference in selectivity between 240k vs 450k resolution for the MS2 scan?
Ans: Theoretically there should be a difference in selectivity between 240k and 450k resolution. However, we normally do not go beyond 120k because the cycle time increases significantly. Note that on the Lumos, the Orbitrap needs 0.5 s (1 s) to acquire an MS2 scan at 240k (450k).
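As a rough sketch of why cycle time becomes limiting, using the Lumos scan times quoted above and a hypothetical number of concurrently eluting precursors:

```python
def prm_cycle_time(n_concurrent: int, ms2_scan_time_s: float) -> float:
    """Cycle time when n precursors must each get one MS2 scan per cycle."""
    return n_concurrent * ms2_scan_time_s

# Ten concurrently eluting precursors (hypothetical):
print(prm_cycle_time(10, 0.5))  # 240k resolution → 5.0 s per cycle
print(prm_cycle_time(10, 1.0))  # 450k resolution → 10.0 s per cycle
```

With 30 s wide peaks, a 5 s cycle yields only about 6 points per peak, which illustrates why staying at 120k or below is usually the better trade-off.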
Q: For a very low-abundance protein (20 copies/cell) and a 2 s fill time, what AGC target do you recommend? Do you notice overfilling of the Orbitrap (charge repulsion)?
Ans: We would use an AGC target of 1-5e5 for the Orbitrap to maximize sensitivity and avoid charge repulsion. Despite the long fill time, you will likely reach the AGC target before the maximum injection time elapses.
Q: Is there a reason for setting the maximum injection time to 118 instead of 120 or 128 ms?
Ans: We use 118 ms according to the vendor specifications table; around 10 ms is needed to move ions around the instrument.
Q: Having different resolutions for the MS2 scans (as shown in the example in the presentation) does not cause importing problems for Skyline?
Ans: No. We now usually recommend importing centroided data, which works well for Orbitrap data. Once the spectra are centroided, extraction is done using a ppm mass accuracy value. You could also extract from profile spectra if you really wanted to; in that case you would want to give Skyline a resolution low enough that any of the resolutions you used would work.
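For centroided data, the m/z extraction window implied by a ppm mass accuracy value is straightforward to compute; the precursor m/z below is a hypothetical example:

```python
def extraction_window(mz: float, ppm: float) -> tuple:
    """Lower and upper m/z bounds for extracting a centroided peak
    at a given ppm mass tolerance."""
    delta = mz * ppm / 1e6
    return mz - delta, mz + delta

# Hypothetical precursor at m/z 785.8421 extracted with a 10 ppm tolerance.
lo, hi = extraction_window(785.8421, 10.0)
print(lo, hi)
```

The window scales with m/z, which is why a single ppm setting works across precursors acquired at different resolutions.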
Q: After importing the data, the presence of a peptide is indicated by a green/red/yellow dot in Skyline. Sometimes these dots do not appear for some peptides or for some transitions. What could be potential problems?
Ans: This usually indicates that no spectra were found that matched the target precursor m/z; in SRM, a transition can be missing because both Q1 and Q3 did not match. It may mean that the PRM precursor m/z targets you used were not what you thought they were. If you look at the raw data and feel confident that you acquired spectra matching the target, yet have no matching chromatograms, then you should post this to the support board and arrange to get us your files so that we can take a closer look at the problem.
Q: Do you have a preference CID vs HCD? (would you consider ETD insensitive?)
Ans: We prefer HCD for the orbitrap in the Fusion Lumos because of speed (see Espadas, Guadalupe, et al. "Evaluation of different peptide fragmentation types and mass analyzers in data‐dependent methods using an Orbitrap Fusion Lumos Tribrid mass spectrometer." Proteomics 17.9 (2017)). We have not tried ETD with PRM.
Q: The cut-off value of 0.9 corresponds to 1% FDR, as mentioned during the talk. Could you please elaborate on this relationship?
Ans: The 0.9 value is the score provided by PeptideProphet which is dataset-dependent. In our case the report of PeptideProphet indicated that this score cut-off corresponded to FDR < 1%. The cut-off score needs to be adjusted depending on the search engine used.
Q: How do you deal with retention time shifts, both from an aging column, or after installing a new column?
Ans: We address this issue by using scheduling windows of +/- 5 min (90 min gradient) and by strictly controlling the performance of the system, specifically chromatographic stability, through the use of quality control samples (see http://journals.plos.org/plosone/article/related?id=10.1371/journal.pone.0189209)
Q: What is the difference between external and reverse calibration?
Ans: Both cases are external calibration curves, but to avoid the problem of obtaining a blank matrix, in the reverse calibration curve the amount of light peptide is fixed and a dilution curve is made with the heavy peptide. In contrast, in the standard external calibration curve the fixed amount corresponds to the heavy peptide, while the dilution curve is made with the light version (see Webinar #13).
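A minimal sketch of a reverse calibration curve, with invented numbers: the light amount is fixed, the heavy standard is serially diluted, and the measured heavy/light ratio is fit against the spiked heavy amount.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, y_mean - slope * x_mean

# Hypothetical reverse-curve data: fixed light, serially diluted heavy.
heavy_fmol = [0.1, 0.5, 1.0, 5.0, 10.0]      # spiked heavy amounts
ratio_hl   = [0.02, 0.11, 0.21, 1.02, 2.05]  # measured heavy/light ratios

slope, intercept = linear_fit(heavy_fmol, ratio_hl)
```

Assuming equal response factors for light and heavy, the fixed light amount can be estimated as 1/slope (roughly 4.9 fmol with these invented numbers); in practice the curve is also used to establish the linear range and limits of quantification.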
Q: How many peptides can we monitor in a common PRM experiment on a Lumos instrument? Thanks
Ans: We have monitored up to 90 peptide pairs (heavy and light), making a total of around 180 peptides after scheduling optimization in a 90 min gradient.
Q: If your signature peptide for quantification has a longer missed-cleavage version, how would you do quantification? Especially when you don't have the isotope-labeled version of the missed-cleavage peptide.
Ans: You need to assess the digestion reproducibility and perform relative quantitation. We have previously explored this type of problem in a recent study from our laboratory (see Chiva C, Sabidó E. Peptide Selection for Targeted Protein Quantitation. J Proteome Res. 2017 Mar 3;16(3):1376-1380. doi: 10.1021/acs.jproteome.6b00115. Epub 2017 Jan 30. PMID: 28102078)
Q: Can the SRM and PRM quantitation be done without building the library?
Ans: Yes, but then make sure to have heavy internal standards for confident peptide identification.
Q: Why not include MS1 chromatogram extraction?
Ans: This is a real option, and it is becoming popular.
Q: How do you generate the pep.xml files that are required for the spectral library?
Ans: We have used the TPP pipeline for database search, but you can use many other peptide identification file formats with Skyline (see https://skyline.gs.washington.edu/labkey/wiki/home/software/Skyline/page.view?name=building_spectral_libraries )
Q: On a different topic: I tried to download sMRM data from an API 6500 and I can't see the chromatogram. From the Skyline questions and answers I understood it has to be in mzML format. I converted to mzML using msconvert and still do not see the chromatogram. Any suggestions, please?
Ans: It is not correct that you need mzML. But since you used msconvert, you have ProteoWizard installed, which means you can right-click your raw data file in Windows Explorer and open it with SeeMS to review the exact chromatogram Q1 and Q3 pairs in the file. Take a very close look at those m/z values and see how well they match the ones in your target list; chances are they don't match quite well enough. Did you export your transition list from Skyline or some other tool? Continue the discussion on the Skyline support board if you feel strongly that Skyline is in error.
Q: What gradient length do you normally use?
Ans: We normally use a 90 min gradient.
Q: How do you deal with methionine oxidation (and other PTMs) for peptide and protein quantification?
Ans: We normally assess the digestion reproducibility and perform relative quantitation with this type of peptide. We have previously explored this type of problem in a recent study from our laboratory (see Chiva C, Sabidó E. Peptide Selection for Targeted Protein Quantitation. J Proteome Res. 2017 Mar 3;16(3):1376-1380. doi: 10.1021/acs.jproteome.6b00115. Epub 2017 Jan 30. PMID: 28102078)
Q: What method you used to measure the exact standard peptide concentration?
Ans: The peptides are commercially available and we rely on the quantification provided by the vendor.
Q: How do you deal with samples with a very small amount of peptide (just above the background, or no detection, for example in control groups)?
Ans: We normally integrate the background noise, using the elution retention time of the heavy standard as a reference, to avoid the problem of having missing values.
Q: When using heavy standards, do you always multiplex, i.e. heavy and light peptide are trapped sequentially with same fill time, and then fragments from both are analyzed together? Or is it ok to capture/analyze heavy and light separately, i.e. fill times might differ? I assume in the latter option, there is normalization of the signal based on fill time done by the instrument. Thank you
Ans: This is a very good question. When we do relative quantitation, we analyze the heavy peptides at 15k resolution with its corresponding fill time, whereas we use higher resolution and longer fill times for the endogenous versions. However, in some cases this results in non-optimal parallelization of the instrument time.
Q: Could you comment on how you do the quantification in cases where you only have crude but not absolutely quantified heavy reference peptides? Would you still calculate the ratio of light/heavy for each peptide and use that ratio for relative comparison between samples? Would you need to consider this in downstream processing/statistics somehow (e.g. log-transform)? Many thanks to the presenters and the whole Skyline-team for all your efforts and the great webinar!
Ans: The comparisons are performed by i) considering all the available transition peak areas for a peptide or protein, ii) optionally dividing by a normalization standard, iii) taking the log, iv) averaging any technical replicates and v) performing a t-test on the resulting values.
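The steps above can be sketched as follows; all areas and log values are invented for illustration, and scipy.stats.ttest_ind with equal_var=False would give the same statistic plus a p-value:

```python
import math
from statistics import mean, stdev

def peptide_log_abundance(transition_areas, norm_area):
    """Steps i-iii: sum the available transition peak areas, divide by
    a normalization standard, and take the log (base 2 here)."""
    return math.log2(sum(transition_areas) / norm_area)

def welch_t(a, b):
    """Step v: Welch's t-statistic between two groups of values."""
    return (mean(a) - mean(b)) / math.sqrt(
        stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))

# Steps i-iii for one injection (hypothetical areas):
print(peptide_log_abundance([1200, 800, 500], 1000))  # log2(2.5)

# Step iv: average technical replicates (hypothetical log2 values),
# one inner list of duplicate injections per biological sample.
cond_a = [mean(rep) for rep in [[1.0, 1.1], [1.2, 1.2], [1.1, 1.0]]]
cond_b = [mean(rep) for rep in [[2.0, 2.1], [2.2, 2.2], [2.1, 2.0]]]

# Step v: compare the two conditions on the averaged values.
t = welch_t(cond_b, cond_a)
print(t)
```

Working in log space makes the ratios symmetric and the replicate averages better behaved, which is why the log transform comes before averaging and testing.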