04-24-2012, 11:49 PM   #13
nickloman

Quote:
Originally Posted by pmiguel
About Nick's paper. Anyone else think that the MiSeq got sandbagged? Especially in comparison to the GS-Jr. Only 10% of a MiSeq run was used for the assembly. (And, mysteriously, 1/2 of that ended up mapping to one of the plasmids.) With ABySS I have seen improvements in contig/scaffold lengths up to 100x coverage.

--
Phillip

Hi Phillip,

A fair point!

On the MiSeq run we multiplexed seven E. coli strains, and the comparison strain (280) came out at about 15% of the run, pretty much as expected. And yes, we had very high plasmid coverage (we hypothesise why in the paper), so we lost a bit of chromosomal coverage.
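
For a rough sense of the numbers, here's a back-of-the-envelope sketch in Python; the run yield, plasmid fraction and genome size below are illustrative assumptions rather than the exact figures from the paper:

Code:
# Rough per-strain coverage from a multiplexed MiSeq run.
# All inputs are illustrative assumptions, not the paper's exact figures.
run_yield_bp     = 1_000_000_000  # assume ~1 Gb for a 2012-era MiSeq run
strain_fraction  = 0.15           # strain 280 took ~15% of the run
plasmid_fraction = 0.5            # suppose half its reads map to plasmids
chromosome_bp    = 4_600_000      # typical E. coli chromosome size

strain_bp = run_yield_bp * strain_fraction
chrom_cov = strain_bp * (1 - plasmid_fraction) / chromosome_bp
print(f"strain 280 bases: {strain_bp/1e6:.0f} Mb, "
      f"chromosomal coverage ~{chrom_cov:.0f}x")

Even with half the reads soaked up by plasmids under those assumptions, a multiplexed strain still ends up with double-digit chromosomal coverage, which is usable but leaves room for the kind of improvement Phillip describes at higher coverage.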

But yes, I think you are right: with more coverage we might have got better MiSeq assemblies. I was still quite impressed with their performance, though, and as you saw they were the most effective at reconstructing gene space. Note that we were able to use MIRA on all the data; it's an Overlap-Layout-Consensus assembler and doesn't need coverage as high as de Bruijn graph assemblers to work well (indeed, too much coverage is a problem for it).
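
One way to see why de Bruijn graph assemblers want more raw coverage is the usual read-to-k-mer coverage conversion; a small sketch, with read length and k chosen purely for illustration:

Code:
# Effective k-mer coverage seen by a de Bruijn graph assembler:
#   C_k = C * (L - k + 1) / L
# Read length and k below are arbitrary illustrative choices.
def kmer_coverage(read_cov: float, read_len: int, k: int) -> float:
    """Convert nominal read coverage into k-mer coverage."""
    return read_cov * (read_len - k + 1) / read_len

for read_cov in (15, 30, 60):
    ck = kmer_coverage(read_cov, read_len=150, k=31)
    print(f"{read_cov}x reads (150 bp) -> ~{ck:.0f}x in 31-mers")

With 150 bp reads a 31-mer only sees about 80% of the nominal read coverage, and sequencing errors fragment the graph further, which is part of why an overlap assembler like MIRA can get by on thinner data.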

Part of the idea of the paper was to be practical; indeed, these data were used for outbreak epidemiology at the time. One E. coli strain per two 454 Junior runs, one strain per 316 chip and seven strains per MiSeq run is the kind of experimental design you would envisage in a real lab.

Hope that's helpful

Nick