Data Normalization with Heavy Peptide along with Global Standard

Dhaval Patel  2025-03-31 15:12
 

Hi Skyline Support Team,

I'm a new user learning targeted proteomics data analysis with Skyline and have some questions about normalization approaches for my experiment.

My current setup includes:

  • Around 2-3 peptides per protein
  • Around 2-3 transitions per peptide
  • One transition for each heavy labeled peptide (for normalizing chromatography and instrument variation)
  • BSA protein as a global standard for sample processing normalization

I would appreciate your guidance on the following:

  1. Is it possible in Skyline to normalize multiple unlabeled transitions (e.g., Transition A and Transition B) against a single heavy labeled transition (Transition C)? For example, can we normalize A over C and B over C for relative protein abundance analysis?

  2. For proteins where we don't have labeled peptides, can we use labeled peptides from other proteins as normalization references? If yes, could you please advise how to set that up in Skyline?

  3. We're using BSA as a global standard for sample processing normalization. Is there a way to implement a two-step normalization process where we first normalize all samples using their respective BSA peak areas/ratios, and then apply a second normalization using the heavy labeled peptides?

  4. When reviewing protein abundance reports (simply control vs. treatment normalized peak area or area ratio), how can I determine the calculation method used (e.g., whether Skyline averages or sums the responses from each peptide's transitions)? Is there a way to customize these calculations?

I'm happy to share my Skyline documents if that would help in addressing these questions.

Thank you for your assistance. I look forward to your response.

Best regards,

 
 
Nick Shulman responded:  2025-03-31 16:38
1. If your light and heavy peptides have completely different sets of transitions, but you still want to normalize by dividing the total light area by the total heavy area for that peptide, then you should check the "Simple precursor ratios" checkbox on the "Quantification" tab at "Settings > Peptide Settings".
However, this would be an unusual thing to want to do with peptides.
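For intuition, the "simple precursor ratio" Nick describes can be sketched outside Skyline as the total light area divided by the total heavy area. The numbers below are made up and purely illustrative; this is not Skyline's internal code:

```python
# Toy numbers, purely illustrative -- not Skyline's implementation.
# "Simple precursor ratio": total light area divided by total heavy area,
# even when the light and heavy transition sets differ in size.

light_areas = [12000.0, 8500.0, 6100.0]  # e.g., three light transitions
heavy_areas = [15000.0]                  # e.g., a single heavy transition

simple_ratio = sum(light_areas) / sum(heavy_areas)
print(simple_ratio)  # 26600 / 15000, about 1.773
```

Because the light and heavy totals come from different numbers of transitions, the ratio's absolute value is not meaningful on its own; it is only useful for comparing the same peptide across runs.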

2. If you want to normalize by dividing by the area of a completely different peptide then that is a "surrogate standard". There is some information about surrogate standards here:
https://skyline.ms/wiki/home/software/Skyline/page.view?name=Surrogate%20Standards

3. I am pretty sure that the normalization method you are describing, where you divide first by the global standard area and then by the heavy area, does not make mathematical sense. That would be similar to dividing by the square of the normalization factor, which also would not make sense.
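A toy numeric sketch of that point (all values hypothetical, not from any real run): if the heavy standard and the BSA global standard both reflect the same run-to-run variation, dividing by both removes that factor twice and over-corrects the result:

```python
# Hypothetical values illustrating why normalizing twice over-corrects.
true_light = 100.0
f = 1.2  # run-to-run variation factor (e.g., injection amount)

light = true_light * f  # observed light area
heavy = 50.0 * f        # heavy standard sees the same factor f
bsa = 200.0 * f         # so does the global standard (nominal area 200)

once = light / heavy                     # f cancels: 2.0, as intended
twice = (light / heavy) / (bsa / 200.0)  # divides by f a second time

print(once)   # 2.0
print(twice)  # 2.0 / 1.2, about 1.67 -- over-corrected
```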

4. As of Skyline 24.1, the value that you see in the Document Grid "Protein Abundance" column is calculated by summing the transition areas. When customizing your report, there is a different column, "Protein Abundance Transition Average", which you could use instead.
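To make the distinction concrete, here is a small sketch with made-up transition areas (the peptide names and values are hypothetical); it mimics the two kinds of roll-up, not Skyline's actual implementation:

```python
# Hypothetical transition areas for one protein in one replicate,
# grouped by peptide. Illustrative only -- not Skyline's code.
transition_areas = {
    "PEPTIDEA": [12000.0, 8500.0, 6100.0],
    "PEPTIDEB": [20400.0, 15800.0],
}

all_areas = [a for areas in transition_areas.values() for a in areas]

# Analogous to "Protein Abundance" (sum of transition areas):
abundance_sum = sum(all_areas)
# Analogous to a transition-average roll-up:
abundance_avg = sum(all_areas) / len(all_areas)

print(abundance_sum)  # 62800.0
print(abundance_avg)  # 12560.0
```

The two values differ by a factor equal to the transition count, which matters if the number of transitions varies between proteins or between versions of the document.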

You can learn more about the Document Grid and Custom Reports here:
https://skyline.ms/wiki/home/software/Skyline/page.view?name=tutorial_custom_reports

If you want to compare protein or peptide abundances between different groups of replicates (i.e. cohorts) then you should look at the Group Comparison tutorial:
https://skyline.ms/wiki/home/software/Skyline/page.view?name=tutorial_grouped

-- Nick
 
Dhaval Patel responded:  2025-03-31 18:47
Hi Nick,
Thanks for your quick and constructive feedback. I have a few follow-up questions/comments to ensure I fully understand your recommendations:

1. So do you recommend using matching sets of light and heavy transitions for peptide quantification? I had just refined the method and reduced the internal standard (IS) transitions to a single transition to decrease the overall number of MRMs, which increases the scan time (dwell time) for each transition and improves chromatography.
2. I understand your point about surrogate standards.
3. I'm trying to determine the best approach for normalization. If normalizing first with the global standard and then with the heavy area doesn't make sense, how would you recommend using the BSA exogenous standard protein data for sample processing normalization (i.e., trypsin digestion) alongside the heavy area data for instrument variation normalization? Any specific suggestions?
4. Thank you for directing me to the Document Grid to check which transitions are being used for the calculations.

I appreciate your guidance on these technical aspects.

Thank you
Dhaval
 
Nick Shulman responded:  2025-03-31 20:04
Some of your internal standards will be used for quality control, but not normalization.
Your BSA protein is probably in there so that you can be sure the digestion step worked correctly. However, if something went wrong in the digestion, you might see a completely wrong amount of signal from the BSA peptides, but you probably would not have enough information to correct the problem by multiplying by anything.

If you want to figure out which is the best normalization method, you should collect data for multiple technical replicates, and also collect data for a sample which has been diluted by a known amount.
The best normalization method will minimize the CV of measured values across the technical replicates and will also yield the correct ratio when comparing the diluted to undiluted samples.
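If you export the normalized values from a Skyline report, the CV comparison Nick describes can be scripted. A minimal sketch with made-up replicate values (both sets of numbers are hypothetical):

```python
import statistics


def cv(values):
    """Coefficient of variation (%) across technical replicates."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)


# Hypothetical normalized values for one peptide across four technical
# replicates, under two candidate normalization methods.
ratio_to_heavy = [1.02, 0.98, 1.01, 0.99]    # light/heavy ratio
ratio_to_global = [1.10, 0.85, 1.05, 0.95]   # normalized to global standard

for name, values in [("heavy", ratio_to_heavy), ("global", ratio_to_global)]:
    print(f"{name}: CV = {cv(values):.2f}%")
# Prefer the method with the lower CV across technical replicates
# (and verify it also recovers the expected dilution ratio).
```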

-- Nick