SEQanswers

SEQanswers > Applications Forums > De novo discovery

Old 01-03-2013, 06:08 PM   #1
tonybert
Member
 
Location: Seattle

Join Date: Aug 2012
Posts: 38
Very small N50 values from preliminary assembly of a bacterial genome

Greetings, I am trying to assemble a small microbial genome that is approximately 1.6 Mbp in size. We used Illumina HiSeq technology with 72 bp paired-end reads. As a first pass at assembly, I used the Velvet package; downloading and installation were quite simple. I also read through a few tutorials on pre-processing of Illumina data.

I initially followed Nick Loman's suggestions for pre-processing the sequences:
http://pathogenomics.bham.ac.uk/blog...nome-assembly/

Overall, our read qualities showed median Q values of 38, with slight decreases towards the 5' ends of the reads.

However, I found it quite strange that I was not seeing any contigs greater than 60 bp, and the N50 value was around 24. Below are the initial velveth and velvetg commands I used.

tonybert$ velveth run1velveth_01022012/ 31 -fasta -shortPaired COLLAPSED.fasta
tonybert$ velvetg run2velveth_01022012/ -ins_length 300 -exp_cov 227

I assessed the contigs.fa file and found only sequences of ~60 bp. Quite disappointing.
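For reference, N50 is the contig length at which the largest contigs, taken together, cover at least half of the total assembled bases. A minimal sketch in Python (the contig lengths below are hypothetical, not from this assembly):

```python
def n50(lengths):
    """Return the N50: the length L such that contigs of length >= L
    together cover at least half of the total assembled bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# An assembly made of nothing but ~60 bp fragments can only ever
# have an N50 in that range, regardless of genome size.
print(n50([60, 58, 55, 24, 20]))  # -> 58
```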

Since then, I have tried using unfiltered raw reads, both paired-end and single-end, and I keep getting the same results.

I have also tried adjusting the k-mer length to 21, as well as running the data through with no insert-length estimate or expected coverage.
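One thing worth noting about -exp_cov: Velvet expects k-mer coverage, not base coverage, and the two are related by Ck = C * (L - k + 1) / L for read length L. A small sketch of that conversion (the read count here is assumed purely for illustration; it is not stated in the post):

```python
def kmer_coverage(base_cov, read_len, k):
    """Convert base coverage C to Velvet's expected k-mer coverage
    Ck, using Ck = C * (L - k + 1) / L for read length L."""
    return base_cov * (read_len - k + 1) / read_len

# Illustrative numbers only (the read count is an assumption):
genome_size = 1.6e6        # ~1.6 Mbp genome
n_reads = 5_000_000        # hypothetical total read count
base_cov = n_reads * 72 / genome_size   # 225x base coverage
print(kmer_coverage(base_cov, 72, 31))  # -> 131.25 at k=31
```

Note how the k-mer coverage at k=31 is noticeably lower than the base coverage, so an -exp_cov value taken from base coverage can be a substantial overestimate.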

If anyone has any ideas about what the issue might be, I would sincerely appreciate any comments.
Old 01-03-2013, 06:31 PM   #2
mchaisso
Member
 
Location: Seattle, WA

Join Date: Apr 2008
Posts: 84

If there were problems with your run, and you have the budget to resequence with PacBio (probably < $1k), it may make more sense to simply send some DNA to Expression Analysis. With the new consensus-calling module (Quiver), the consensus accuracy can be higher than Sanger.
Old 01-03-2013, 06:56 PM   #3
tonybert
Member
 
Location: seattle

Join Date: Aug 2012
Posts: 38

Being new to Illumina, I don't know exactly how to judge whether the run was good quality or not. I assume there must be some files in the output from the sequencing center that would show this. Are there any specific characteristics you look for to assess run quality? To be honest, I thought everything went fine; most of the read positions had really high median Q values, and I was under the impression that this was a reasonable way to assess run quality.
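FastQC is the usual tool for this kind of check, but the per-position median Q values mentioned above can also be sketched directly. A rough example, assuming FASTQ records already parsed into (header, seq, qual) tuples and Phred+33 quality encoding:

```python
import statistics

def per_cycle_median_q(records, offset=33):
    """Median Phred quality at each read position (cycle), given
    FASTQ records as (header, seq, qual) tuples; Phred+33 assumed."""
    columns = {}
    for _, _, qual in records:
        for i, ch in enumerate(qual):
            columns.setdefault(i, []).append(ord(ch) - offset)
    return [statistics.median(columns[i]) for i in sorted(columns)]

reads = [("r1", "ACGT", "IIII"),   # 'I' encodes Q40
         ("r2", "ACGT", "II##")]   # '#' encodes Q2
print(per_cycle_median_q(reads))  # -> [40.0, 40.0, 21.0, 21.0]
```

High per-cycle medians alone won't reveal pairing or orientation problems in the files, which is why the replies below focus on the input format rather than base quality.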

Thanks again for the comments. -tony
Old 01-07-2013, 05:41 AM   #4
rchikhi
Member
 
Location: France

Join Date: Jan 2013
Posts: 13

Did you try running Velvetg with "-exp_cov auto" instead of "-exp_cov 227"?
Old 01-07-2013, 07:21 AM   #5
mcnelson.phd
Senior Member
 
Location: Connecticut

Join Date: Jul 2011
Posts: 162

Where did you get COLLAPSED.fasta from, and are you sure that it's properly formatted (i.e. are the read pairs properly interleaved, and is read 2 in the correct orientation)? My initial guess would be that there is a problem with your input file, such that reads aren't correctly paired or in the right orientation, and that is what's causing your bad assemblies.

My suggestion would be to take your raw read files and run them through without any pre-processing. The pre-processing scripts, if run on read 1 and read 2 separately, most likely resulted in one read of a pair being thrown away at any given step, so when you created your merged file for input into Velvet, the reads were no longer properly paired together, which resulted in a very poor assembly. Not doing any pre-processing will mean that your read pairs stay together correctly and will give you a starting point for what you should expect in terms of # of contigs and N50.
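The interleaving described above can be sketched as follows. This is a minimal illustration only (the function name is made up, and both FASTQ files are assumed to be in-memory line lists that are still synchronized record for record):

```python
def interleave_fastq(r1_lines, r2_lines):
    """Merge two synchronized FASTQ files (as lists of lines, 4 lines
    per record) into the alternating read1/read2 record order that
    Velvet's -shortPaired input expects. A length mismatch means the
    pairing was already broken upstream (e.g. by separate filtering)."""
    if len(r1_lines) != len(r2_lines) or len(r1_lines) % 4 != 0:
        raise ValueError("R1/R2 records are not in sync")
    out = []
    for i in range(0, len(r1_lines), 4):
        out += r1_lines[i:i + 4] + r2_lines[i:i + 4]
    return out

r1 = ["@read1/1", "ACGT", "+", "IIII"]
r2 = ["@read1/2", "TTTT", "+", "IIII"]
print(interleave_fastq(r1, r2))
```

If pre-processing has discarded reads from one file but not the other, the length check fails immediately, which is exactly the failure mode described in this reply.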

Once you have the raw assembly done, you can mess around with various pre-processing options, taking care to keep the read files properly paired, and then redo your assembly.


And if you think you really need to do more sequencing, don't even think about a PacBio run. If you already have an Illumina library made, then find someone with a MiSeq to do a 2x250 PE run for you.
Old 01-07-2013, 08:51 AM   #6
winsettz
Member
 
Location: US

Join Date: Sep 2012
Posts: 91

That said, mcnelson is right in that you should fork your workflow and attempt to use your paired end files without pre-processing in parallel with testing pre-processing. If you're trying to reduce runtime, I've seen decreases in runtime using velveth with -create_binary, and if you're going to be running velvet and tuning your assembly, then any decrease in time and resources consumed will be helpful.

Last edited by winsettz; 01-09-2013 at 11:22 AM.