SEQanswers




Old 09-15-2017, 10:00 AM   #1
LauraGP
Junior Member
 
Location: Havana

Join Date: Sep 2017
Posts: 3
Question Genome size in X coverage calculation for RNASeq experiments

Hello everyone,

Traditionally in sequencing, X coverage is determined as the product of the number of sequenced reads and their length, divided by the genome size.

Regarding RNASeq experiments that use purified poly(A)+ RNA as starting material for library preparation, wouldn't it be more accurate to use the product of the number of known genes (as in Ensembl) and the average transcript size for the given organism, instead of the whole genome size, as the denominator in the X coverage calculation?
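To make the comparison concrete, here is a rough sketch of both calculations. All the numbers below (read count, read length, gene count, average transcript length) are illustrative assumptions, not real annotation figures:

```python
# Rough sketch of the two coverage calculations discussed above.
# Every number here is an illustrative assumption.

def x_coverage(n_reads, read_len, denominator_bp):
    """Classic X coverage: (number of reads * read length) / target size."""
    return n_reads * read_len / denominator_bp

n_reads = 30_000_000          # 30 M single-end reads (assumed)
read_len = 100                # 100 bp reads (assumed)

genome_size = 3_100_000_000   # ~3.1 Gbp human genome
# Transcriptome-style denominator: number of genes * average transcript length
n_genes = 20_000              # rough protein-coding gene count (assumed)
avg_transcript_len = 2_000    # ~2 kb average mRNA (assumed)
transcriptome_size = n_genes * avg_transcript_len

print(x_coverage(n_reads, read_len, genome_size))        # ~0.97X vs. the genome
print(x_coverage(n_reads, read_len, transcriptome_size)) # 75.0X vs. the transcriptome
```

The same data set looks like <1X against the genome but 75X against this hypothetical transcriptome, which is what motivates the question.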

In the end, because we are working with an mRNA-enriched library, it is obvious that there will be regions of the genome we will not cover at all. So maybe we could just exclude them from our X coverage calculation?

Thanks in advance for any help!
LauraGP is offline   Reply With Quote
Old 09-15-2017, 11:06 AM   #2
kmcarr
Senior Member
 
Location: USA, Midwest

Join Date: May 2008
Posts: 1,136
Default

Quote:
Originally Posted by LauraGP View Post
In one sense that is more accurate, but calculating coverage like this for RNA-Seq experiments is meaningless since it doesn't take into account the wide variation in transcript abundance. Within a single data set for an mRNA library you may find coverage of > 1000X for highly expressed transcripts and < 1X for rare transcripts.

If you are performing a differential expression study then most researchers think in terms of the number of reads (counts) per mRNA library, regardless of read length or total transcriptome length. On the other hand, if your goal is de novo transcriptome assembly/discovery then you do need to think in terms of total Gbp of raw sequence data, accepting that the abundant transcripts will be sequenced to a ridiculous depth in order to get enough data for the rarer transcripts to reach a reasonable level of coverage for assembly.
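To put some toy numbers on that first point, here is a sketch of per-transcript coverage; the transcript names, read counts, and lengths below are all invented for illustration:

```python
# Toy illustration (invented numbers): per-transcript coverage in an mRNA
# library spans orders of magnitude, so a single "average X" hides everything.

read_len = 100  # bp, assumed

# (label, mapped read count, transcript length in bp) -- all hypothetical
transcripts = [
    ("highly expressed transcript", 50_000, 1_800),
    ("typical transcript",             500, 2_000),
    ("rare transcript",                  5, 3_000),
]

for name, count, length in transcripts:
    cov = count * read_len / length  # coverage of this one transcript
    print(f"{name}: {cov:.1f}X")
```

With these made-up counts the first transcript sits near 2800X while the last is below 0.2X, even though they come from the same library.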
kmcarr is offline   Reply With Quote
Old 09-15-2017, 12:01 PM   #3
Brian Bushnell
Super Moderator
 
Location: Walnut Creek, CA

Join Date: Jan 2014
Posts: 2,695
Default

You can calculate coverage as the total number of bases sequenced divided by the total transcriptome size. But that's not very useful in organisms (such as eukaryotes) with differential splicing. I don't think it is useful to calculate average coverage for RNA-seq in a eukaryote. It makes more sense to decide whether you have an adequate number of reads to statistically determine expression.
Brian Bushnell is offline   Reply With Quote
Old 09-18-2017, 08:03 AM   #4
LauraGP
Junior Member
 
Location: Havana

Join Date: Sep 2017
Posts: 3
Default

Hello,

Thanks for your replies!

I believe that in the end we are facing a dilemma: increase the number of sequenced reads (that is, the sequencing coverage/depth), or increase the number of biological replicates. Ideally we would like to increase both, but the budget is not always on our side.

For differential gene expression analysis, our goal is to statistically determine whether an observed difference in read counts for a gene is due to a particular condition or to random variation. In the DESeq2 package, shrunken log fold change estimates help deal with genes with low read counts, essentially filtering out those unlikely to pass a user-defined FDR threshold. It seems to me that the DE genes identified in a comparison are just the tip of the iceberg when the sequencing depth (the number of sequenced reads) is low.

My main question is then: how can we accurately estimate the number of reads we need to sequence per sample in order to identify at least 95% of the genes that are altered by our experimental condition?
So far I have only found empirical evidence and recommendations in the literature, not a formal description of how to perform this kind of estimation.
Also, how can we determine the variance in sequencing depth (X coverage) across the genome/transcriptome, other than by ultra-deep sequencing? Is there a way to estimate the variance in coverage a priori, for instance by analyzing GC content across the known reference genome?
LauraGP is offline   Reply With Quote
Old 09-18-2017, 11:13 PM   #5
nucacidhunter
Senior Member
 
Location: Iran

Join Date: Jan 2013
Posts: 1,083
Default

As pointed out earlier, coverage is not really applicable to the transcriptome, since its size is variable and unknown. If you have many samples (treatments, replicates), you can sequence some of the libraries deeper and down-sample to find the optimum read number. For instance, for human DE around 25M reads are commonly recommended, but you could sequence some libraries to 40M reads and, if the extra depth gives better results for your experiment, do top-up sequencing for the rest.
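A minimal sketch of that down-sampling idea, subsampling invented per-gene counts from a deep library to see how many genes stay detectable at shallower depths (gene names, counts, and the detection threshold are all hypothetical):

```python
import random

random.seed(0)

# Invented full-depth counts for a deeply sequenced library (e.g. 40 M reads).
full_counts = {f"gene{i}": random.randint(0, 2_000) for i in range(1_000)}

def downsample(counts, fraction):
    """Keep each read independently with probability `fraction`,
    mimicking a shallower sequencing run of the same library."""
    out = {}
    for gene, c in counts.items():
        out[gene] = sum(1 for _ in range(c) if random.random() < fraction)
    return out

# How many genes still pass a minimal-count filter as depth drops?
for frac in (1.0, 0.5, 0.25, 0.1):
    sub = downsample(full_counts, frac)
    detected = sum(1 for c in sub.values() if c >= 10)
    print(f"{frac:.0%} of reads -> {detected} genes with >= 10 reads")
```

If the number of detected genes (or, in a real analysis, DE calls from DESeq2 on the subsampled counts) has plateaued well before 100% of the reads, the shallower depth was already sufficient.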
nucacidhunter is offline   Reply With Quote
Old 09-21-2017, 07:22 AM   #6
LauraGP
Junior Member
 
Location: Havana

Join Date: Sep 2017
Posts: 3
Default

Quote:
Originally Posted by nucacidhunter View Post
Thanks for your suggestions! Got it.
LauraGP is offline   Reply With Quote