SEQanswers

Old 01-19-2015, 09:43 AM   #1
jwfoley
Senior Member
 
Location: Stanford

Join Date: Jun 2009
Posts: 181
Default Is read duplication the wrong metric?

My collaborator had some libraries sequenced much more deeply after the initial results seemed too shallow, but the total read counts per library were terribly imbalanced because the sequencing center pooled them based on Picogreen rather than qPCR. As usual, I checked the % duplication in each library (according to Picard MarkDuplicates). But then I noticed that one outlier with especially high depth also had very high duplication, and that got me thinking.

Read duplication is a function of sequencing depth. If you have 10 molecules in your library and sequence 100 reads, at least 90% of your reads will be duplicates; if you sequence 1000, at least 99%. And as we've discussed before, you expect more duplication from targeted, specific protocols like RNA- and ChIP-seq than whole-genome sequencing, which means you can't really compare the duplicate counts between them and it may even be a bad idea to remove duplicates (if you don't have a way to distinguish PCR duplicates from "true" fragmentation duplicates).
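To make the depth dependence concrete, here's a quick sketch (not from Picard - just uniform sampling with replacement) of the expected % duplication when you draw reads at random from a library of a given number of distinct molecules:

```python
import math

def expected_duplication(molecules: int, reads: int) -> float:
    """Expected duplicate fraction when `reads` draws are made uniformly,
    with replacement, from `molecules` distinct fragments.

    Expected distinct fragments observed: m * (1 - (1 - 1/m)**n);
    every read beyond the first observation of a fragment is a duplicate.
    """
    expected_unique = molecules * (1.0 - (1.0 - 1.0 / molecules) ** reads)
    return 1.0 - expected_unique / reads

# The toy numbers from above: 10 molecules, 100 or 1000 reads
print(expected_duplication(10, 100))   # just over 0.90
print(expected_duplication(10, 1000))  # just over 0.99
```

No matter how good the library prep was, the duplicate fraction is driven toward 1 by depth alone.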

As expected, sequencing the exact same library more deeply gave it a higher % duplication the second time. However, I noticed Picard's output also contains an estimate of the total number of molecules in the original library (before amplification). This is apparently derived from the "Lander-Waterman equation" - certainly a couple of trustworthy names! Anyway, whatever the estimate's absolute accuracy, it does seem much more robust to sequencing depth:

The extra "_L1/L2/L3" at the end of some library names indicates the same library was sequenced in more than one HiSeq lane. The libraries that were sequenced again more deeply are highlighted, along with the outlier that originally got my attention - if you look at the estimated library size instead of the % duplication, it's only slightly below the median for that ChIP antibody, so the high % duplication was a false alarm caused by sequencing very deeply. The resequenced libraries greatly increased their % duplication with more depth, but their estimated library sizes barely changed. And as expected from the complexity of template each protocol captures, worse antibodies gave lower estimated library sizes, the input control gave the highest, and RNA-seq the lowest.
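For anyone curious, the model behind Picard's library-size estimate can be reproduced in a few lines: assume unique = X * (1 - exp(-N/X)) for true library size X and N reads, then solve for X numerically. This is my own re-derivation of the Lander-Waterman-style model, not Picard's actual code, so details may differ:

```python
import math

def estimate_library_size(total_reads: int, unique_reads: int) -> float:
    """Infer the number of distinct molecules X in the original library
    from total and unique (non-duplicate) read counts, assuming
    unique = X * (1 - exp(-total / X)), solved by bisection.
    (A re-derivation of the model; Picard's implementation may differ.)
    """
    if not 0 < unique_reads < total_reads:
        raise ValueError("need 0 < unique < total (at least one duplicate)")
    f = lambda x: x * (1.0 - math.exp(-total_reads / x)) - unique_reads
    lo = float(unique_reads)   # f(lo) < 0: X can't be smaller than unique
    hi = lo
    while f(hi) < 0:           # grow the upper bracket until f changes sign
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The robustness falls out directly: feeding in counts from the same library sequenced at different depths recovers roughly the same X, even though the % duplication changes a lot.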

In fact, you can even use Picard's equation to predict the % duplication of a "perfect" RNA-seq library with 350 million unique fragments (consistent with our data for a mammal), as a function of sequencing depth:
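Running the same model forward gives that curve. A sketch (the 350M figure is from our data above; the depths are arbitrary example values):

```python
import math

def predicted_duplication(library_size: float, reads: float) -> float:
    """Duplicate fraction the Lander-Waterman-style model predicts at a
    given depth, for a library of `library_size` unique fragments:
    unique = X * (1 - exp(-N/X)); everything else is a duplicate."""
    unique = library_size * (1.0 - math.exp(-reads / library_size))
    return 1.0 - unique / reads

LIBRARY = 350e6  # unique fragments, per the estimate above
for depth in (10e6, 50e6, 100e6, 350e6, 1e9):
    pct = 100.0 * predicted_duplication(LIBRARY, depth)
    print(f"{depth / 1e6:6.0f}M reads -> {pct:4.1f}% duplication")
# e.g. ~1.4% at 10M reads but ~36.8% at 350M reads:
# the library hasn't changed, only the depth has.
```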

Most of us don't work at these scales so the effect might go unnoticed, but it's worth using depth-independent metrics as the technology keeps improving (or in case your multiplexing is imbalanced like ours).

Based on these results I'm thinking about ignoring % duplication and looking at estimated library size as my quality metric instead. What do other people think about it?

Last edited by jwfoley; 01-19-2015 at 09:53 AM.
Old 01-20-2015, 01:30 AM   #2
Bukowski
Senior Member
 
Location: UK

Join Date: Jan 2010
Posts: 390
Default

We've been quite happy using % duplication as a QC metric for our exomes for some time. In our hands it's one metric that actually correlates well with the quality of the input DNA: FFPE or low-input samples tend to have higher duplication rates. When you're sequencing everything to roughly the same depth, it's a good comparator.

However, for high-depth targeted re-sequencing it becomes less useful, and we think the estimated library size is more informative in that application, especially when we've titrated lower inputs of FFPE/fresh samples through capture experiments.

Tags
duplication, picard, picard markduplicates

