Analysis Strategy

klemens froehlich  2021-01-05 20:03
 

Dear Skyline Team,
First of all, happy new year, and thank you for your great work!

Yesterday a colleague approached me and asked about detection q values in Skyline for some peptides he wanted to quantify. He would now like to give his collaboration partners some numbers on how confident, numerically, we can be that we detected those peptides in our PRM experiments.
I only have experience with peak scoring for DIA data, so I did some reading in the forum and the advanced peak picking tutorial. I think a similar problem is described here:

https://skyline.ms/announcements/home/support/thread.view?entityId=f869bae4-7d09-1032-b0cf-da2025827bc8&_docid=thread%3Af869bae4-7d09-1032-b0cf-da2025827bc8

I came up with a general workflow, and I would really appreciate it if you could share your thoughts:

First, I would generate independent spectral libraries and retention time libraries for all measured targets. For this I would use the integrated Prosit feature (awesome integration, by the way!).
Then I would go to Reintegrate and build an mProphet model based on second-best peaks.
However, when I try to do that, Skyline gives me an error saying I do not have enough targets (0 targets and 374 decoys), which is ironic, because when I check "use decoys" Skyline says I do not have any decoys and that I should uncheck the "use decoys" box.

Since I cannot measure the samples again and we are stuck with the PRM masses isolated during the measurements, I was wondering whether I can get Skyline to accept decoys without the 10 m/z mass shift usually required by mProphet. I would not use that in the final analysis, but I would at least like to compare this decoy scoring model with the second-best peak scoring model.
I therefore generated decoys, exported them as a transition list, subtracted 10 m/z from the precursor m/z, added a "decoy" column filled with "true", and reimported the list. But Skyline does not recognize these peptides as decoys and only adds them as normal targets (at least they are not red, like my decoys normally are).
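
For what it is worth, I did the transition-list edit roughly like the following sketch (Python with pandas; the file name and the "Precursor m/z" column header are placeholders, since the actual headers depend on how the list was exported from Skyline):

```python
import pandas as pd

# Placeholder file and column names; the real headers depend on how the
# transition list was exported from Skyline.
transitions = pd.read_csv("decoy_transitions.csv")

# Shift each decoy precursor down by 10 m/z and flag the rows as decoys.
transitions["Precursor m/z"] = transitions["Precursor m/z"] - 10.0
transitions["decoy"] = "true"

transitions.to_csv("decoy_transitions_shifted.csv", index=False)
```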

I think I screwed up the document somehow, but before I go deeper down this rabbit hole I wanted to ask whether this is the right direction or whether I should try something else.

I will attach the Skyline document.

Best, Klemens

 
 
Nick Shulman responded:  2021-01-05 20:27
You have a peptide in your document, "RFVLSGGRWEK", which has only a heavy precursor and no light precursor. If you delete that peptide from your document, you will find that you can successfully train a model using second-best peaks.

Because of this peptide, many of the features that mProphet would like to use are unavailable (they are shown greyed out and in italic), because some but not all of your peptides have the feature available.
The Edit Peak Scoring Model dialog can tell you which peptides are missing a feature. First, select the feature's row in the Available Feature Scores grid. Then select the "Feature Scores" tab. If you hover your mouse below the "Unknown" part of the bar graph, binoculars appear; click on them and Skyline will list, in the Find Results window, all the peptides that are missing that feature.
(You then have to cancel out of the Edit Peak Scoring Model dialog in order to actually see the Find Results window.)
You will find that "RFVLSGGRWEK" is the peptide that is missing those features and causing them to be unavailable.

The error that you were seeing happened because Skyline was only able to use the "Intensity" and "Retention Time Difference" scores, on account of that one defective peptide. In the error message, the word "Decoy" means whatever you have told Skyline to use as decoys, which would be either decoy peptides or second-best peaks.

-- Nick
 
klemens froehlich responded:  2021-01-06 04:14
Hi Nick,
Thank you very much for the pointer! The scoring works now, but the peak picking makes everything a lot worse.
For example, if I train an mProphet model with everything ticked, click OK in the model window and OK in the Reintegrate window, and then look at the peptide
G.APLATELRCQCLQTLQGIHLK.N [34, 54] (missed 1)
in half of the samples the heavy peptide is now picked in the middle of nowhere, with the real peak several minutes away.

The default peak picking was much better. Did I do something wrong with the mProphet model, or should I just not use it in this case?
Of course the Prosit predictions are not perfect; could that be at fault?

How does Skyline handle the following scenario:
I apply a peak picking model and then manually correct the peak picking.
Will Skyline still give me detection q values I can use, or does this screw things up somehow?

Best, Klemens
 
Nick Shulman responded:  2021-01-06 09:59
I am not sure how well we expect "second-best peaks" to work as a decoy strategy. It might be that, in order to get q values you can really trust, you need to actually acquire data for decoy peptides.

I don't think there's anything wrong with using Prosit the way that you are. Even if Prosit were to inaccurately predict a spectrum or retention time, it is not going to do so in a way that makes the decoy peak look better than the target peak.

-- Nick
 
Brendan MacLean responded:  2021-01-06 10:40
Note that it is also possible to get q values (and z-scores) by training the default Skyline model, though we cannot really speak to the validity of calibrating on second-best peaks for the null distribution with this model either. I have always felt that you should not include the retention time difference score when using second-best peaks, because for that score the targets and decoys are not independent. That is, if the best peak is always very close to the predicted retention time, that will by definition force the second-best peak to be further away, since they share the same chromatograms.
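
As a toy illustration of that dependence (made-up numbers, and simplified so that the "best" peak is just the one nearest the predicted retention time):

```python
# Toy example: three candidate peak apexes in one shared chromatogram and a
# hypothetical predicted retention time of 30.0 minutes.
predicted_rt = 30.0
candidate_apexes = [29.8, 33.5, 41.0]

rt_diffs = sorted(abs(rt - predicted_rt) for rt in candidate_apexes)
best_diff, second_best_diff = rt_diffs[0], rt_diffs[1]
# best_diff is about 0.2 and second_best_diff is 3.5: because both peaks come
# from the same chromatogram, a best peak sitting near the predicted RT forces
# the second-best ("decoy") peak away from it, so the two RT-difference scores
# are not drawn from independent distributions.
```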

A model using second-best peaks is definitely at your own risk. I cannot point to a solid citation for this method. It was added more because we had heard and seen that it works "surprisingly well" in some cases.

To simply get q values and z-scores with the default model: in the Edit Peak Scoring Model form, under "Choose model", choose "Default" instead of "mProphet" (the default selection). This model uses fixed proportional weights for the feature scores it uses, but training it with a set of decoys adjusts the weights so that the total scores become z-scores on the null distribution estimated from the decoys. This will not change the peaks picked by Skyline at all; it just assigns z-scores and q values.
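
Conceptually, that calibration step amounts to standardizing the composite score against the decoy score distribution. A minimal sketch of the idea in Python (an illustration only, not Skyline's actual implementation):

```python
import numpy as np

def z_scores_vs_decoy_null(target_scores, decoy_scores):
    """Standardize target composite scores against the decoy (null)
    distribution. Illustration only; Skyline's calibration of the default
    model is more involved than a simple mean/std standardization."""
    null_mean = np.mean(decoy_scores)
    null_std = np.std(decoy_scores, ddof=1)
    return (np.asarray(target_scores) - null_mean) / null_std
```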

We are planning on making this much more automatic (when you have true decoy targets) in the next Skyline release, so that more people end up with z-scores and q values on the default Skyline peak picks.

Skyline does not currently assign new z-scores and q values to manually adjusted peaks. We could easily recalculate the z-scores, but truly recalculating q values would mean that all q values change every time you manually adjust a peak, because q values are calculated on the entire set of targets, unless we were to use some kind of q value estimation hack, like the one employed by Hannes Roest in the TRIC paper (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5008461/). That would be of somewhat questionable statistical validity, and even the statistics around manually adjusting targets but not decoys leave you with a somewhat unfair advantage for the targets. To be fully valid, you would probably need to blind yourself to which are targets and which are decoys, manually review and adjust everything, and then recalculate the statistics.
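
To make the "all q values change" point concrete, here is a rough sketch of a plain target-decoy q value estimate (not Skyline's exact procedure); because every q value depends on the global ranking of targets against decoys, moving one peak's score can move all of them:

```python
import numpy as np

def target_decoy_q_values(target_scores, decoy_scores):
    """Rough target-decoy q value estimate (illustration only).

    For each target, FDR is estimated as the number of decoys scoring at
    least as well divided by the number of targets scoring at least as well;
    the q value is the minimum such FDR over all thresholds that would still
    accept that target. Results are returned in descending-score order.
    """
    targets = np.sort(np.asarray(target_scores, dtype=float))[::-1]
    decoys = np.asarray(decoy_scores, dtype=float)
    fdrs = np.array([
        min(1.0, np.sum(decoys >= score) / rank)
        for rank, score in enumerate(targets, start=1)
    ])
    # q value = running minimum of FDR from the lowest-scoring target upward,
    # which is why a change to any one score can shift every q value.
    return np.minimum.accumulate(fdrs[::-1])[::-1]
```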

Needless to say, this is not an area we have tread very far into. We calculate the statistics once and report them. If you need to make manual adjustments, then you will need to justify them in some other way than with a statistical cut-off value (e.g. 0.01 or 0.05).

In my experience, most reviewers would agree that having a coeluting heavy standard on the entire y- and b-ion series is sufficient proof of measuring your intended analyte, even when there is no visible, similar signal in the analyte (light) chromatograms. So, when you have a small number of targets and matching heavy standard peptides, you generally do not need to worry about statistical modeling, and you are more likely to cause yourself unnecessary pain in getting a usable statistical model out of such a small number of training points.

--Brendan