02-21-2012, 05:57 AM — Post #25
Originally Posted by joss211:
It seems to me that the acceptability of a 4% error rate would depend on the sample type. One advantage of sequencing clusters (or beads) is that each read is a fairly accurate determination of sequence derived from a single template. I am a bit of a statistics moron, but it seems that if your starting material is impure (e.g. a tumor sample), it would be easier to distinguish normal sequence from minority-contributor sequence (say 5%) if you are 99.9% sure of each base in a read than if you are 96% sure of each base in a read. While 200x coverage might be sufficient for the former, would it be sufficient for the latter?
That's one of the reasons a 4% error rate is not acceptable in the clinic, where calls guide treatment, dosing, and inclusion/exclusion criteria. It will be acceptable for plenty of other, R&D-oriented uses, in both academia and the private sector.
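The intuition above can be put into rough numbers with a back-of-envelope binomial calculation. This is only a sketch under simplifying assumptions (errors are independent across reads, and, as a worst case, every error produces the same alternate base): pick the smallest alt-read count that sequencing error alone would almost never reach, then ask how often a true 5% variant clears that bar at 200x coverage. The specific numbers (depth 200, variant fraction 5%, error rates 0.1% vs 4%) are the ones from the discussion above.

```python
# Back-of-envelope: can a 5% minority variant be distinguished from
# sequencing error at 200x coverage?  Simplifying assumptions: errors
# are independent across reads and (worst case) always produce the
# same alternate base.
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def detection(depth, vaf, err, alpha=1e-3):
    """Return (threshold, power): the smallest alt-read count whose
    error-only false-positive rate is below alpha, and the probability
    a real variant at fraction `vaf` reaches that count."""
    # threshold: smallest k that error alone rarely reaches
    k = next(k for k in range(depth + 1) if binom_tail(depth, err, k) < alpha)
    # a read supports the alt allele if it came from the variant,
    # or from the reference but was miscalled
    p_alt = vaf + (1 - vaf) * err
    return k, binom_tail(depth, p_alt, k)

for err in (0.001, 0.04):  # Q30-like vs 4% per-base error
    k, power = detection(depth=200, vaf=0.05, err=err)
    print(f"error={err:.3f}: need >= {k} alt reads, power = {power:.3f}")
```

With 0.1% error the threshold stays low and a 5% variant is detected nearly every time; at 4% error the threshold must rise well above the variant's expected read count, and power at 200x collapses to roughly a coin flip, which is the quoted poster's point that 200x may suffice for the former but not the latter.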
— larissa