I’ve been tasked with assembling a very large soil metagenomic dataset, so large that I’m having second thoughts about my regular processing pipeline, and was hoping some of you fine folk might have some advice:
I have:
- 15 high-depth samples, totalling around 4.5 billion PE reads at 2x150
- 45 lower-depth samples, ~725 million PE reads at 2x150
- All sequenced on a HiSeq 2500
I should also mention that the end goal here is to assemble as many near-complete genomes as possible, for mapping metatranscriptomic reads from the same samples.
Ordinarily, my pipeline would look something like this (rough example commands for the first two steps below the list):
1. Adapter and quality trim to Q ≥ 10 using bbduk
2. Merge reads with bbmerge; if the merge rate is low, proceed with unmerged PE reads only
3. Concatenate files and normalise to a coverage of 100 with bbnorm, discarding low-depth kmers (depth ≤ 5)
4. Co-assemble all samples using MEGAHIT and/or SPAdes
5. Map the raw reads back to the assembly with bbmap
6. Downstream analysis: binning, annotation, etc.
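For reference, steps 1 and 2 currently look roughly like this per sample (file names are placeholders, and the trimming parameters are just my usual defaults rather than anything tuned for this dataset):

# adapter trim (right end, k=23 down to mink=11, 1 mismatch allowed), then quality trim both ends to Q10
bbduk.sh in1=sample_R1.fastq.gz in2=sample_R2.fastq.gz out1=trimmed_R1.fastq.gz out2=trimmed_R2.fastq.gz ref=adapters ktrim=r k=23 mink=11 hdist=1 tpe tbo qtrim=rl trimq=10 minlen=50 threads=16

# try to merge the trimmed pairs; anything that doesn't merge goes to outu1/outu2
bbmerge.sh in1=trimmed_R1.fastq.gz in2=trimmed_R2.fastq.gz out=merged.fastq.gz outu1=unmerged_R1.fastq.gz outu2=unmerged_R2.fastq.gz ihist=insert_hist.txt threads=16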
I'm currently at step 2 and finding that only ~30% of the reads can be merged, so I'm leaning towards just proceeding with the unmerged PE reads. Where I'm really struggling is deciding whether co-assembly is a viable option given the size of the dataset. bbnorm is taking a very long time to run even on a single file, close to 24 hours with 16 threads and 400 GB of RAM, using the following settings:
bbnorm.sh in=infile.fastq.gz out=normalised.fastq.gz hist=hist.txt prefilter=t mindepth=5 target=100 threads=16
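For scale, the co-assembly I keep going back and forth on would be roughly the following, with all 60 trimmed libraries passed to MEGAHIT as comma-separated lists (paths, thread count and memory fraction are placeholders):

# build comma-separated R1/R2 lists from the trimmed libraries
R1=$(ls trimmed/*_R1.fastq.gz | paste -sd, -)
R2=$(ls trimmed/*_R2.fastq.gz | paste -sd, -)
# single co-assembly of everything
megahit -1 "$R1" -2 "$R2" -o coassembly --presets meta-large -t 32 -m 0.9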
loglog.sh estimates ~15 billion unique kmers per sample for the high-depth samples, and ~2.5 billion for the low-depth samples.
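Those estimates come from a plain loglog.sh run per library, something like this (file names are placeholders; I'm assuming paired input via in2= behaves the same as in the other BBTools scripts):

loglog.sh in=trimmed_R1.fastq.gz in2=trimmed_R2.fastq.gz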
So my main question is: would it make more sense to assemble the samples individually, concatenate the resulting contigs, and deduplicate them with dedupe.sh or something similar (rough sketch below)? My main concern is that I'll attempt to co-assemble all the reads, only for the job to crash after eating several days of time on our cluster.
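For clarity, the per-sample route I have in mind would finish with something along these lines (MEGAHIT shown; output paths and the minidentity threshold are placeholders, not settings I've committed to):

# assemble each sample on its own (repeated / looped over all 60 samples)
megahit -1 sampleA_R1.fastq.gz -2 sampleA_R2.fastq.gz -o asm_sampleA --presets meta-large -t 16

# pool the per-sample contigs and collapse duplicate / contained contigs
cat asm_*/final.contigs.fa > all_contigs.fa
dedupe.sh in=all_contigs.fa out=contigs_nr.fa minidentity=99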
Does anyone have any other time-saving tips or advice for processing and assembling datasets this large?
I've never worked on a dataset this large before so apologies if I'm missing something obvious here!
Many thanks