04-25-2012, 04:13 AM   #14
pmiguel
Senior Member
Location: Purdue University, West Lafayette, Indiana
Join Date: Aug 2008
Posts: 2,315

Quote:
Originally Posted by nickloman

Part of the idea of the paper was to be practical, indeed these data were used for outbreak epidemiology at the time. One E. coli strain per two 454 Jr runs, one strain per 316 chip and 7 strains per MiSeq run is the kind of experimental design you would envisage in a real lab.
Not to back away from what I wrote -- it looks from my vantage point like you and your colleagues let your sense of fair play prevent the 454 Jr from getting trounced. That said, I could second-guess your experimental design all day, but you and your colleagues actually did the experiment. I am just sitting here taking pot shots. Still, in case you do another one, my (unsolicited) input: I personally would like to see a dollar-for-dollar constrained test and an hour-for-hour constrained test of these instrument systems.

If I could switch topics somewhat: in figure S2 of this pdf, it looks like the Torrent and the MiSeq both slightly undervalue the quality of their base calls for most quality values, while the 454 Jr slightly overvalues the quality of its base calls. Am I reading the chart correctly? I knew the Torrent tended towards "modesty" in these matters, but I had the vague impression that Illumina tended towards "bluster".

That is, it looks to me from the chart that the MiSeq bin of bases with a nominal quality score of 20 (1 error in 100) actually had 1 error in 1,000, indicating that they really were Q30 bases rather than Q20 bases?
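For anyone following along, the Phred scale maps an observed error probability p to Q = -10 * log10(p), so the Q20-bin-behaving-like-Q30 reading above can be sanity-checked in a couple of lines (a minimal sketch; the error rates are just the illustrative figures from my example, not measured data from the paper):

```python
import math

def phred_q(error_prob):
    """Convert an observed error probability to a Phred quality score."""
    return -10 * math.log10(error_prob)

# Nominal Q20 bin: 1 error in 100 called bases
print(phred_q(1 / 100))    # -> 20.0
# What the chart seems to show for that bin: 1 error in 1,000
print(phred_q(1 / 1000))   # -> 30.0, i.e. those bases behaved like Q30
```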

--
Phillip

Last edited by pmiguel; 04-25-2012 at 04:20 AM. Reason: Added specific quality value example.