SEQanswers

11-10-2010, 01:59 PM   #1
shurjo
Senior Member
Location: Rockville, MD
Join Date: Jan 2009
Posts: 126

denominator for normalization

I'm hoping for some input from my statistically gifted brethren on this one:

I have sixteen RNA-Seq libraries which were aligned with TopHat. Counts of reads mapping to RefSeq genes were generated with htseq-count. My statistician collaborators need to normalize these counts for differences in sequencing depth. Here are my choices for the denominator:
  1. Total number of reads in the raw data (wc -l on the FASTQ file from the sequencer, divided by four, since each read occupies four lines)
  2. Total number of alignments reported by TopHat (wc -l on accepted_hits.sam; multi-mapping reads contribute one line per alignment)
  3. Number of unique reads for which TopHat found at least one location (sort | uniq | wc -l on the sequence field of the SAM file)
  4. Sum of counts across all genes within each library
Does anyone have feedback on this? For choice 1, the totals range from 96,396,160 to 131,352,500 reads across the sixteen libraries.
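To make the question concrete: whichever denominator we pick, the statisticians would divide each library's counts by it, along these lines (a minimal R sketch using choice 4; 'counts' is a made-up name for the combined htseq-count matrix):

Code:
# counts: genes x 16 integer matrix assembled from the htseq-count outputs
denom <- colSums(counts)               # choice 4: total assigned counts per library
cpm   <- t(t(counts) / denom) * 1e6    # scale each library to counts per million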

Any help will be much appreciated,

Thanks,

Shurjo
11-10-2010, 11:27 PM   #2
Simon Anders
Senior Member
Location: Heidelberg, Germany
Join Date: Feb 2010
Posts: 994

All of these can be skewed by a few strongly and differentially expressed genes. See the discussion in our paper, and especially in Robinson and Oshlack's paper.

Our DESeq package offers, via its function 'estimateSizeFactors', a simple way to get a robust number for the denominator; the method is explained in the paper and in the package documentation.
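In outline (the object names are placeholders for your own count matrix and condition factor):

Code:
library(DESeq)

# countsTable: integer matrix of htseq-count values, genes x libraries
# conds: factor giving the experimental group of each library
cds <- newCountDataSet(countsTable, conds)
cds <- estimateSizeFactors(cds)

sizeFactors(cds)                      # one robust scaling factor per library
head(counts(cds, normalized = TRUE))  # counts divided by the size factors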

Simon
11-11-2010, 04:32 PM   #3
shurjo
Senior Member
Location: Rockville, MD
Join Date: Jan 2009
Posts: 126

Quote:
Originally Posted by Simon Anders
All of these can be skewed by a few strongly and differentially expressed genes. See the discussion in our paper, and especially in Robinson and Oshlack's paper.

Our DESeq package offers, via its function 'estimateSizeFactors', a simple way to get a robust number for the denominator; the method is explained in the paper and in the package documentation.

Simon
Hi Simon,

Many thanks for your reply. I have read both your paper and the Robinson and Oshlack paper, and I agree with the points made in them. However, in the context of my data, the following points suggest that a simpler normalization strategy may be adequate:
  1. The sixteen libraries I referred to all come from the same tissue source (lymphoblastoid cell lines)
  2. This is a clinical study where the cells were not "induced" or "perturbed" with an external agent, so there is no expectation that a large number of genes will be differentially expressed between the two groups of 8.
  3. A priori, the chances that an appreciable number of transcripts are present in one or a few of these libraries but absent from the others are low.
I understand that using TMM will be better in the vast majority of data sets. However, my objective here is simply to answer a question from my collaborating statisticians (who will not be using either edgeR or DESeq, but their own tests) as to what makes the best denominator for normalizing libraries for differences in coverage. Given this scenario, do you have any suggestions?
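If I have read your paper correctly, the robust denominator could even be computed outside DESeq and handed to them as a plain number per library; something like this median-of-ratios sketch (the object names are made up, and please correct me if I have misunderstood the method):

Code:
# counts: genes x 16 integer matrix of htseq-count values
# size factor per library = median ratio of its counts to the
# per-gene geometric mean across all libraries (Anders & Huber 2010)
log_geomeans <- rowMeans(log(counts))
size_factors <- apply(counts, 2, function(col)
    exp(median((log(col) - log_geomeans)[is.finite(log_geomeans) & col > 0])))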

Once again, thanks for your help and congratulations on your paper.

Shurjo
09-03-2012, 03:35 PM   #4
carmeyeii
Senior Member
Location: Mexico
Join Date: Mar 2011
Posts: 137

Hi Shurjo,

I have been having a tough time thinking this one through as well, and I would appreciate any insight you gained in solving it. I too am torn between using the htseq-count total, the uniquely mapped reads from TopHat, or all the alignments generated by TopHat.

Thanks for your help,

Carmen