SEQanswers > Bioinformatics > Bioinformatics


Old 07-15-2013, 11:53 PM   #1
Junior Member
Location: Santiago

Join Date: Apr 2013
Posts: 7
Sweet spot coverage: why?

Hi everybody!

I'm a little confused about why there is a "sweet spot" coverage when assembling a genome.

For example, I have been reading that the sweet spot for a 454 dataset is between 60X and 80X, and that more coverage could result in misassemblies. I thought that more coverage meant a lower error probability at each base.
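Just to make my reasoning explicit, this is the toy model I had in mind (the function and numbers are my own illustration, assuming independent errors and a simple majority vote per position, which is probably too simple):

```python
from math import comb

def majority_error(coverage, per_base_error):
    """P(a strict majority of reads at one position are wrong),
    assuming independent errors that all agree on the wrong base --
    a deliberate oversimplification."""
    return sum(
        comb(coverage, k) * per_base_error**k * (1 - per_base_error)**(coverage - k)
        for k in range(coverage // 2 + 1, coverage + 1)
    )

# under this model, more coverage only ever helps
for cov in (1, 10, 60):
    print(cov, majority_error(cov, 0.01))
```

Under these assumptions the error probability just keeps shrinking with depth, which is why the existence of a sweet spot confuses me.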

I imagine there is some kind of distribution related to the error rate of the sequencing technology used, but I'm not clear on it.

thanks in advance!
fgajardoe
Old 07-16-2013, 05:44 AM   #2
Senior Member
Location: Boston area

Join Date: Nov 2007
Posts: 747

There are a number of factors at play here, but it is not surprising to be confused: most statistical problems do better with more data, and assembly is unusual in that it can actually fall apart with too much.

One issue is implementation. Every sequencing error creates a spray of new possibilities for the assembler to consider; with many errors, the computational resources needed to sort these out grow worse than linearly.

Many sequencing errors are not fully random either, so with more coverage the data is more likely to have repetitions of the same systematic error. Greater coverage is also more likely to generate very rare but very troublesome errors, such as chimaeric library members.

Also factoring into this is that at some point more data cannot actually help; the capability of a given technology on that genome will have been exhausted.

I think your guess about error rate is on track, though the class of error probably matters too (insertion/deletion vs. substitution). Error-correcting stages upstream of assembly may also benefit from higher coverage that would cause problems for straight assembly.
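A crude sketch of why correction likes depth (plain k-mer frequency filtering with an arbitrary cutoff and k; real correctors are far more sophisticated): with more reads, true k-mers are seen many times while error k-mers stay near count 1, so a frequency cutoff separates them more cleanly.

```python
from collections import Counter

def solid_kmers(reads, k=5, min_count=2):
    """Keep k-mers seen at least min_count times;
    singletons are likely sequencing errors."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return {km for km, c in counts.items() if c >= min_count}

# three clean copies of a made-up read, plus one copy carrying an error
reads = ["TTACGGATCCTGAAGC"] * 3 + ["TTACGGATACTGAAGC"]
solid = solid_kmers(reads)
print("GGATC" in solid, "GGATA" in solid)  # True False
```

The deeper the coverage, the further apart the two frequency peaks sit, which is exactly the signal a corrector exploits.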
krobison
Old 07-16-2013, 09:28 PM   #3
Junior Member
Location: Santiago

Join Date: Apr 2013
Posts: 7

Thanks! It's much clearer now.
So, if I have a 454 dataset with 300X coverage and I subsample it down to 80X and then assemble, is it a bad idea to use the excluded reads (or part of them) to extend contigs?

Or would it be better to run several assemblies with 80X datasets, each taking different randomly chosen reads, and select the best assembly? (It could be a little time-consuming, I know.)
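For reference, what I mean by subsampling is roughly this (single-end, in-memory sketch of my own; for real data I would use a proper tool, and the 80/300 fraction just comes from my numbers above):

```python
import random

def subsample_fastq(lines, fraction, seed=1):
    """Keep each 4-line FASTQ record with probability `fraction`."""
    rng = random.Random(seed)
    kept = []
    for i in range(0, len(lines) - len(lines) % 4, 4):
        if rng.random() < fraction:
            kept.extend(lines[i:i + 4])
    return kept

fraction = 80 / 300  # target coverage / observed coverage
```

Running this with different seeds would give me the different random 80X datasets for the second idea.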
fgajardoe