SEQanswers
07-06-2017, 03:26 AM   #1
lordjonwald
Junior Member

Location: Belfast
Join Date: Oct 2014
Posts: 5

Processing a very large soil dataset

I've been tasked with assembling a very large soil metagenomic dataset, so large that I'm having second thoughts about my regular processing pipeline, and was hoping some of you fine folk might have some advice:

I have:

15 high-depth samples, totalling around 4.5 billion PE reads at 2x150

45 lower-depth samples, ~725 million PE reads at 2x150

All were sequenced on a HiSeq 2500.

I should also mention that the end goal here is to assemble as many near-complete genomes as possible, for mapping of metatranscriptomic reads from the same samples.

ordinarily my pipeline would look something like this (rough example commands for the first two steps below):
  1. adapter and quality trim to Q ≥ 10 using BBDuk
  2. merge reads with BBMerge; if the merge rate is low, proceed with PE reads only
  3. concatenate files and normalise to a target coverage of 100 with BBNorm, removing low-depth k-mers (depth ≤ 5)
  4. co-assemble all samples using MEGAHIT and/or SPAdes
  5. map raw reads back to the assembly with BBMap
  6. downstream analysis: annotation, binning, etc.
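
For concreteness, steps 1 and 2 would look roughly like this; the file names and trimming parameters are placeholders rather than the exact settings I use:

  # Step 1: adapter + quality trimming with BBDuk (Q >= 10); parameters are illustrative
  bbduk.sh in1=sample_R1.fastq.gz in2=sample_R2.fastq.gz \
      out1=trimmed_R1.fastq.gz out2=trimmed_R2.fastq.gz \
      ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 \
      qtrim=rl trimq=10 minlen=50 threads=16

  # Step 2: attempt to merge the trimmed pairs with BBMerge
  bbmerge.sh in1=trimmed_R1.fastq.gz in2=trimmed_R2.fastq.gz \
      out=merged.fastq.gz outu1=unmerged_R1.fastq.gz outu2=unmerged_R2.fastq.gz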

I'm currently at step 2 and finding that only ~30% of the reads can be merged, so I'm currently thinking that I should just proceed with the unmerged PE reads. Where I'm really struggling is deciding whether or not co-assembly is a viable option in this case, given the size of the dataset. I'm finding that BBNorm is taking a very long time to run even on a single file, nearly 24 hours with 16 threads and 400 GB RAM, using the following settings:
bbnorm.sh in=infile.fastq.gz out=normalised.fastq.gz hist=hist.txt prefilter=t mindepth=5 target=100 threads=16
loglog.sh tells me that there are ~15 billion unique k-mers per sample for the high-depth samples, and ~2.5 billion for the low-depth samples.
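
(For reference, the k-mer cardinality estimates come from something like the call below, run on each sample's trimmed reads; the file name and k value are just placeholders.)

  # Estimate unique k-mer count with BBTools' loglog.sh
  loglog.sh in=trimmed_R1.fastq.gz k=31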

So my main question is: would it make more sense for me to assemble the samples individually, concatenate the resulting contigs and deduplicate them using dedupe.sh or something similar (rough sketch below)? My major concern is that I'll attempt to co-assemble all the reads only to have it crash out after taking up several days on our cluster.
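
To make that alternative concrete, I imagine the per-sample route looking something like the sketch below; the sample names, MEGAHIT preset and dedupe settings are placeholders, not a tested recipe:

  # Assemble each sample separately with MEGAHIT (placeholder sample names)
  for sample in sample_01 sample_02 sample_03; do
      megahit -1 ${sample}_R1.fastq.gz -2 ${sample}_R2.fastq.gz \
          -o ${sample}_megahit -t 16 --presets meta-large
  done

  # Pool the per-sample contigs and collapse duplicate / contained contigs
  # (contig headers would probably need a sample prefix before binning,
  #  since MEGAHIT reuses the same contig names across runs)
  cat *_megahit/final.contigs.fa > all_contigs.fa
  dedupe.sh in=all_contigs.fa out=nonredundant_contigs.fa threads=16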

Does anyone have any other time-saving tips or advice for processing and assembling large datasets?

I've never worked on a dataset this large before, so apologies if I'm missing something obvious here!

Many thanks

Tags
assembly, hiseq 2500, metagenome assembly, normalisation, soil
