Old 03-15-2017, 07:59 AM   #4
Senior Member
Location: USA, Midwest

Join Date: May 2008
Posts: 1,177

Originally Posted by cement_head

Is there a "best" way to normalise and pool amplicon samples to acquire even, balanced read coverage, otherwise known as getting the same "reads per sample" for each of the pooled samples? We often end up with stochastic coverage despite our best efforts. I'm looking for a better way...

Don't focus as much on the "same reads per sample" as on "enough reads per sample," even if the sample-to-sample count varies.
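To make that concrete, here's a quick sanity check you could run on a reads-per-sample table (a hypothetical sketch; the sample names, counts, and the 10,000-read threshold are all made-up examples, not values from any real run):

```python
# Flag samples that fall below a minimum read count.
# Threshold and counts below are hypothetical examples.
MIN_READS = 10_000

reads_per_sample = {
    "S01": 48_210,
    "S02": 12_905,
    "S03": 7_450,   # under-sequenced; would need to be re-pooled or re-run
    "S04": 55_003,
}

low = {s: n for s, n in reads_per_sample.items() if n < MIN_READS}
for sample, n in sorted(low.items()):
    print(f"{sample}: {n} reads (below {MIN_READS})")
```

The point is that only S03 fails here, even though the passing samples differ ~4X from each other, which is fine if all of them clear the depth you need.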

When dealing with hundreds of samples in a typical metagenomic experiment, individually quantifying and normalizing PCR products is unrealistic. Our lab opts for Invitrogen SequalPrep DNA Normalization plates. After running your PCR you dilute the product in the SequalPrep binding buffer and add it to the plate. The wells are coated with something that binds dsDNA; the binding capacity is limited and your PCR products are theoretically in excess. After a short incubation you remove the excess DNA, then wash and elute the bound DNA. Pooling is done as the eluted samples are collected. We do a final cleanup on the pool, QC, quantitate and sequence.
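For context, the manual alternative that these plates replace, quantifying each product and pooling equal masses, boils down to a volume calculation like this (a sketch only; the concentrations and the 50 ng target are invented for illustration):

```python
# Equimolar-by-mass pooling volumes from per-sample concentrations.
# All values are hypothetical; with hundreds of samples, doing this
# per well is exactly the workload the normalization plates avoid.
TARGET_NG = 50.0  # ng of each sample to add to the pool (assumed)

conc_ng_per_ul = {"S01": 22.4, "S02": 8.1, "S03": 15.7}  # hypothetical

volumes_ul = {s: TARGET_NG / c for s, c in conc_ng_per_ul.items()}
for s, v in sorted(volumes_ul.items()):
    print(f"{s}: pool {v:.1f} uL")
```

Each sample contributes the same mass of DNA, so the most dilute sample dictates the largest pipetting volume; scale that by a few 96-well plates and the appeal of a one-step binding plate is obvious.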

Real-life experience: the amount of DNA recovered per well is not what the spec sheet says it will be, and we still may see ~4-5X variation in the number of reads per sample (though not always that much). Also, these plates are pricey, about 110 USD per plate (a little over $1.00/sample). Even with these drawbacks it is a very fast and easy way to (somewhat) normalize your metagenomic libraries.
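A simple way to put a number on that residual spread is the max/min fold change across samples (hypothetical counts below; ~4-5X is the range described above, not a guarantee):

```python
# Summarize sample-to-sample coverage spread as a max/min fold change.
# Read counts are hypothetical examples.
reads = [52_000, 31_500, 14_800, 60_200, 13_900]

fold = max(reads) / min(reads)
print(f"coverage spread: {fold:.1f}X")
```

If that fold change is acceptable, the cheap fix is simply to sequence deep enough that the worst sample still clears your per-sample read target.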
kmcarr