  • Is read duplication the wrong metric?

    My collaborator had some libraries sequenced much more deeply after the initial results seemed too shallow, but the total read counts per library were terribly imbalanced because the sequencing center pooled them based on PicoGreen rather than qPCR. As usual, I checked the % duplication in each library (according to Picard MarkDuplicates). But then I noticed that one outlier with especially high depth also had very high duplication, and that got me thinking.

    Read duplication is a function of sequencing depth. If you have 10 molecules in your library and sequence 100 reads, at least 90% of your reads will be duplicates; if you sequence 1000, at least 99%. And as we've discussed before, you expect more duplication from targeted, specific protocols like RNA- and ChIP-seq than whole-genome sequencing, which means you can't really compare the duplicate counts between them and it may even be a bad idea to remove duplicates (if you don't have a way to distinguish PCR duplicates from "true" fragmentation duplicates).
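
    To make the arithmetic concrete, here's a quick Python sketch (the function name is just for illustration). It gives the best case, where every distinct molecule is read at least once, so everything beyond that first read has to be a duplicate:

    ```python
    def min_duplicate_fraction(unique_molecules, total_reads):
        # Best case: every distinct molecule is read at least once,
        # so any read beyond the first copy of each molecule is a duplicate.
        return max(0.0, 1.0 - unique_molecules / total_reads)

    print(min_duplicate_fraction(10, 100))   # 0.9
    print(min_duplicate_fraction(10, 1000))  # 0.99
    ```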

    As expected, sequencing the exact same library more deeply gave it a higher % duplication the second time. However, I noticed Picard's output also contains an estimate of the total number of molecules in the original library (before amplification). This is apparently derived from the "Lander-Waterman equation" - certainly a couple of trustworthy names! Anyway, whatever the actual accuracy of the molecule count, it does seem much more robust to sequencing depth:
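
    For reference, Picard's documentation describes the estimate as solving C/X = 1 - exp(-N/X) for X, where N is the number of read pairs examined and C the number of distinct pairs observed. Here's a rough Python sketch of that solve as I understand it (the bisection bounds and the example numbers are mine, not Picard's actual code):

    ```python
    import math

    def estimate_library_size(read_pairs, unique_read_pairs):
        """Solve C/X = 1 - exp(-N/X) for X by bisection, where
        N = read pairs examined and C = distinct pairs observed."""
        n, c = float(read_pairs), float(unique_read_pairs)
        if not 0 < c < n:
            raise ValueError("need duplicates (C < N) to estimate library size")
        f = lambda x: c / x - 1.0 + math.exp(-n / x)  # zero at the estimate
        lo, hi = 1.0, 100.0                           # X searched as a multiple of C
        while f(hi * c) > 0:                          # expand until the root is bracketed
            hi *= 10.0
        for _ in range(60):
            mid = (lo + hi) / 2.0
            if f(mid * c) > 0:
                lo = mid
            else:
                hi = mid
        return c * (lo + hi) / 2.0

    # Made-up example: 100M read pairs of which 80M are distinct -> roughly 215M molecules
    print(estimate_library_size(100e6, 80e6))
    ```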

    The extra "_L1/L2/L3" at the end of some library names indicates the same library was sequenced in more than one HiSeq lane. The libraries that were sequenced again more deeply are highlighted, along with the outlier that originally got my attention: if you look at the estimated library size instead of the % duplication, it's only slightly below the median for that ChIP antibody, so the high % duplication was apparently a false alarm caused by sequencing very deeply. The resequenced libraries greatly increased their % duplication with more depth, but their estimated library sizes barely changed. And as expected given what each protocol captures, weaker antibodies gave lower estimated library sizes, input control gave the highest, and RNA-seq the lowest.

    In fact, you can even use Picard's equation to predict the % duplication of a "perfect" RNA-seq library that had 350 million unique fragments (consistent with our data for a mammal), as a function of sequencing depth:
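
    Here's a rough sketch of that prediction using the same relationship as above (function name is mine; the 350 million figure is the one quoted above, and the depth grid is arbitrary):

    ```python
    import math

    LIBRARY_SIZE = 350e6  # unique fragments in the hypothetical "perfect" library

    def expected_duplicate_fraction(read_pairs, library_size=LIBRARY_SIZE):
        # With N read pairs drawn at random from X molecules, the expected number
        # of distinct pairs is C = X * (1 - exp(-N/X)), so duplicates = 1 - C/N.
        distinct = library_size * (1.0 - math.exp(-read_pairs / library_size))
        return 1.0 - distinct / read_pairs

    for depth in (10e6, 50e6, 100e6, 500e6, 1e9):
        print(f"{depth / 1e6:6.0f}M read pairs: {expected_duplicate_fraction(depth):5.1%} duplicates")
    ```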

    Most of us don't work at these scales so the effect might go unnoticed, but it's worth using depth-independent metrics as the technology keeps improving (or in case your multiplexing is imbalanced like ours).

    Based on these results, I'm thinking about ignoring % duplication and using estimated library size as my quality metric instead. What do other people think?
    Last edited by jwfoley; 01-19-2015, 10:53 AM.

  • #2
    We've been quite happy using % duplication as a QC metric for our exomes for some time. In our hands it's one metric that actually correlates well with the quality of the input DNA, so FFPE or low-input samples tend to have higher duplication rates. When you're sequencing everything to roughly the same depth, it's a good comparator.

    However, when we're doing high-depth targeted re-sequencing it becomes less informative, and we think estimated library size is more useful in that application, especially when we've titrated lower inputs of FFPE/fresh samples through capture experiments.
