mProphet issues Tobi  2020-01-13 02:26
 

Dear Skyline Team,

In the attached Skyline-daily document I want to highlight some things on which I would be happy to get your feedback.

When exporting mProphet features unfiltered, the export seems to list all potential peaks for a single peptide at various retention times, each with different z-scores and p-values. However, the q values for all peak candidates are the same despite the different p-values. Is that a bug?

Also, the checkbox to filter for the highest-scoring peak can be highly useful, but it can also lead to errors. The problem is that after correcting peak picking (by hand or with Avant-Garde), this export still reports the highest-scoring candidate rather than the actually picked peak. Having a filter for the integrated peak (instead of, or in addition to, the highest-scoring one) would be very nice, as it would deliver the desired results for both automatic and manual peak picking.

Best,
tobi

 
 
Brendan MacLean responded:  2020-01-13 08:02

Hi Tobi,
I guess this was originally considered a transitional feature that we ourselves used in ensuring our integrated implementation of mProphet was correct. Great that it has lived on and found new uses, but it is not something we generally focus effort on without a request like this one.

Certainly, we could change the q value column to report #N/A for non-best peaks, but for now you can simply make that mental adjustment. The q value only applies to the highest score or lowest p value. It is not possible to assign a q value to the other peaks, because a q value is inherently "about a set of p values", meaning that it is an adjustment to an FDR estimate within a particular set of p values. If we were to assign q values to the entire set of p values for all considered peaks, that would produce very different q values than the Detection Q Values you can get in the Skyline reports.
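[Editor's note: to illustrate Brendan's point that a q value is only meaningful relative to a particular set of p values, here is a minimal sketch of a standard Benjamini-Hochberg-style q-value calculation. This is not Skyline's actual implementation; it only shows why the same p value can receive different q values in different sets.]

```python
def bh_qvalues(pvalues):
    """Benjamini-Hochberg q values for a list of p values.

    Each q value depends on the whole set of p values,
    not just the p value it is assigned to.
    """
    m = len(pvalues)
    # Walk the p values from largest to smallest, tracking original indices.
    order = sorted(range(m), key=lambda i: pvalues[i], reverse=True)
    qvalues = [0.0] * m
    running_min = 1.0
    for rank_from_top, i in enumerate(order):
        rank = m - rank_from_top           # 1-based rank in ascending order
        q = pvalues[i] * m / rank          # raw BH adjustment
        running_min = min(running_min, q)  # enforce monotonicity
        qvalues[i] = running_min
    return qvalues

# The same p value (0.01) gets a different q value in a different set:
print(bh_qvalues([0.01, 0.5, 0.9]))      # q for 0.01 is ~0.03
print(bh_qvalues([0.01, 0.012, 0.015]))  # q for 0.01 is ~0.015
```

Because the adjustment divides by rank within the set, adding or removing other p values changes every q value, which is why q values computed over all candidate peaks would not match the Detection Q Values in Skyline reports.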

At present, we only output the peaks detected and scored during import, as you have noted. We have never actually done the work to calculate and store even all of the component scores for manually selected peaks. For instance, we never calculate chromatogram-based scores like Shape, Signal to noise, etc. So, we don't actually have them all to export at the time of File > Export > mProphet Features. I always wanted to do this work, so that we could at least assign a Detection Z Score to a manually integrated peak, but we still haven't done it.

The original intent of File > Export > mProphet Features was just to allow normal mProphet testing on the peaks and features Skyline produces during import, to ensure our mProphet implementation works like other external implementations. We haven't taken the feature much past that, I am afraid, but I am glad it has found new uses.

We will consider your feedback. Switching to #N/A for non-best peaks seems simple enough, and if we manage to calculate and store the chromatogram-based features during manual integration (along with other peak statistics - area, FWHM, etc. - which we currently calculate), then we will consider adding them to the mProphet feature output.

Thanks for taking the time to post your thoughts to the support board.

--Brendan

 
Tobi responded:  2020-01-13 09:19

Dear Brendan,

thank you very much for the fast and extensive response. With your hints, it makes complete sense now. While the technical details of mProphet might be of interest to only a few people, one crucial thing remains.

Z, p, and q values for manually adjusted peaks would make a whole lot of sense. Especially since values like area, dotp, idotp, etc. change dynamically with changing peak boundaries, the user would expect and rely on the scores following suit, at least when rescoring is applied. Reporting scores for peaks other than the integrated ones can lead to serious errors and misunderstandings, and it seems quite risky to me. Users do not expect this behavior, as it is not intuitive at the moment (at least for non-coders and non-statisticians). I also find it really important to preserve the link between quantification and identification, meaning both are based on the same peak boundaries for curated and uncurated peptides alike.

So far thank you for your insights and time.

With best regards,
tobi

 
Brendan MacLean responded:  2020-01-13 10:47

Absolutely. It was part of my original plan to get to those scores on manually integrated peaks. We just never did, and you are kind of the first person to bring it up. It is a good reminder that we still have work to do.

I am also hoping to make z/p/q values more automatic for the default Skyline score when decoys are included in the document, after recently finding that at a q value of 0.0001 the default peak choices differed from a fully trained mProphet model by only 0.3%, and at a q value of 0.01 they differed by 2%.

Definitely expect scoring improvements in the 20.2 release of Skyline and the Skyline-daily releases leading up to it.

Thanks for your feedback.

--Brendan