My collaborator had some libraries sequenced much more deeply after the initial results seemed too shallow, but the total read counts per library came out terribly imbalanced because the sequencing center pooled them based on PicoGreen quantification rather than qPCR. As usual, I checked the % duplication of each library (as reported by Picard MarkDuplicates). But then I noticed that one outlier with especially high depth also had very high duplication, and that got me thinking.
Read duplication is a function of sequencing depth. If you have 10 molecules in your library and sequence 100 reads, at least 90% of your reads will be duplicates; if you sequence 1000, at least 99%. And as we've discussed before, you expect more duplication from targeted, specific protocols like RNA- and ChIP-seq than whole-genome sequencing, which means you can't really compare the duplicate counts between them and it may even be a bad idea to remove duplicates (if you don't have a way to distinguish PCR duplicates from "true" fragmentation duplicates).
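The arithmetic above is easy to check with a toy simulation (a sketch, not anything from Picard): draw reads uniformly at random from a small pool of molecules and count every read beyond the first hit of each molecule as a duplicate.

```python
import random

def duplicate_fraction(library_size, n_reads, seed=0):
    """Simulate sequencing: sample n_reads uniformly from a pool of
    library_size distinct molecules; a read is a duplicate if its
    molecule was already seen."""
    rng = random.Random(seed)
    seen = set()
    dups = 0
    for _ in range(n_reads):
        if rng.randrange(library_size) in seen:
            dups += 1
        else:
            seen.add(rng.randrange(library_size))  # note: see below
    return dups / n_reads

# Cleaner version without the double draw:
def duplicate_fraction2(library_size, n_reads, seed=0):
    rng = random.Random(seed)
    seen = set()
    dups = 0
    for _ in range(n_reads):
        mol = rng.randrange(library_size)
        if mol in seen:
            dups += 1
        else:
            seen.add(mol)
    return dups / n_reads

# With only 10 molecules, at most 10 reads can be non-duplicates:
print(duplicate_fraction2(10, 100))   # always >= 0.90
print(duplicate_fraction2(10, 1000))  # always >= 0.99
```

Deeper sequencing of the same tiny pool can only push the fraction higher, which is the whole problem with % duplication as a depth-independent metric.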
As expected, sequencing the exact same library more deeply gave it a higher % duplication the second time. However, I noticed Picard's output also contains an estimate of the total number of molecules in the original library (before amplification). This is apparently derived from the "Lander-Waterman equation" - certainly a couple of trustworthy names! Anyway, whatever its absolute accuracy as a molecule count, it does seem much more robust to sequencing depth:
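For the curious, the relation Picard's estimate is based on sets the observed unique reads C equal to X·(1 − e^(−N/X)), where N is the total reads and X the unknown library size, and solves for X numerically. Here is a minimal sketch of that idea using simple bisection (the function name and details are mine, not Picard's):

```python
import math

def estimate_library_size(total_reads, unique_reads):
    """Solve  unique = X * (1 - exp(-total/X))  for X by bisection.
    This is the Lander-Waterman-style relation underlying Picard's
    estimated library size; implementation details are illustrative."""
    if not 0 < unique_reads < total_reads:
        raise ValueError("need 0 < unique < total (some duplicates observed)")
    f = lambda x: x * (1.0 - math.exp(-total_reads / x)) - unique_reads
    lo = hi = float(unique_reads)   # X is at least the unique count
    while f(hi) < 0:                # grow upper bracket until sign flips
        hi *= 2
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g. 100 reads of which 50 were unique implies a library of ~63 molecules
print(estimate_library_size(100, 50))
```

The key property is the one in the post: sequencing the same library deeper changes N and C, but the X that satisfies the equation stays roughly put.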
The extra "_L1/L2/L3" at the end of some library names indicates the same library was sequenced in more than one HiSeq lane. The libraries that got sequenced again more deeply are highlighted, along with the outlier that originally caught my attention - if you look at the estimated library size instead of the % duplication, it's only slightly below the median for that ChIP antibody. Apparently the high % duplication was a false alarm caused by the very deep sequencing. The resequenced libraries greatly increased their % duplication with more depth, but their estimated library sizes barely changed. And as expected given what each protocol captures, weaker antibodies gave lower estimated library sizes, input controls gave the highest, and RNA-seq the lowest.
In fact, you can even use Picard's equation to predict the % duplication of a "perfect" RNA-seq library that had 350 million unique fragments (consistent with our data for a mammal), as a function of sequencing depth:
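That prediction follows directly from the same relation: the expected number of distinct fragments among N reads drawn from a library of X fragments is X·(1 − e^(−N/X)), so the expected % duplication is just one minus unique over total. A sketch (the 350M figure is the one from our data):

```python
import math

LIBRARY_SIZE = 350e6  # unique fragments; roughly what we see for a mammalian RNA-seq library

def expected_duplication(n_reads, library_size=LIBRARY_SIZE):
    """Expected % duplication when n_reads are drawn uniformly from a
    library of library_size distinct fragments."""
    unique = library_size * (1.0 - math.exp(-n_reads / library_size))
    return 100.0 * (1.0 - unique / n_reads)

for depth in (10e6, 50e6, 100e6, 350e6, 1e9):
    print(f"{depth/1e6:6.0f} M reads -> {expected_duplication(depth):5.1f}% duplication")
```

Even a "perfect" library racks up substantial duplication once depth approaches the library size, with no PCR artifacts involved at all.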
Most of us don't work at these scales so the effect might go unnoticed, but it's worth using depth-independent metrics as the technology keeps improving (or in case your multiplexing is imbalanced like ours).
Based on these results I'm thinking about ignoring % duplication and looking at estimated library size as my quality metric instead. What do other people think about it?