Processing a very large soil dataset

    I’ve been tasked with assembling a very large soil metagenomic dataset, so large that I’m having second thoughts about my regular processing pipeline, and was hoping some of you fine folk might have some advice:

    I have:

    15 high-depth samples, totalling around 4.5 billion PE reads at 2x150

    45 lower-depth samples, ~725 million PE reads at 2x150

    All were sequenced on a HiSeq 2500.

    I should also mention that the end goal here is to assemble as many near-complete genomes as possible, for mapping metatranscriptomic reads from the same samples.

    Ordinarily my pipeline would look something like this (rough commands sketched below):
    1. Adapter- and quality-trim to Q ≥ 10 using bbduk
    2. Merge reads with bbmerge; if the merge rate is low, proceed with PE reads only
    3. Concatenate files and normalise to a target coverage of 100 with bbnorm, removing low-depth kmers (depth ≤ 5)
    4. Co-assemble all samples using megahit and/or spades
    5. Map raw reads back to the assembly with bbmap
    6. Downstream analysis: annotation, binning, etc.
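
    For reference, here is a rough sketch of what steps 1–5 look like for me in practice. File names are placeholders and the flags are from memory, so treat this as illustrative rather than exact:

    # 1. adapter and quality trimming (adapters.fa is the adapter reference that ships with bbtools)
    bbduk.sh in=sample_R1.fq.gz in2=sample_R2.fq.gz out=trim_R1.fq.gz out2=trim_R2.fq.gz ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 qtrim=rl trimq=10 minlen=50 threads=16

    # 2. attempt read merging; unmerged pairs carry on through the pipeline
    bbmerge.sh in1=trim_R1.fq.gz in2=trim_R2.fq.gz out=merged.fq.gz outu1=unmerged_R1.fq.gz outu2=unmerged_R2.fq.gz

    # 3. normalisation (same settings as the bbnorm command further down)
    bbnorm.sh in=unmerged_R1.fq.gz in2=unmerged_R2.fq.gz out=norm_R1.fq.gz out2=norm_R2.fq.gz target=100 mindepth=5 prefilter=t threads=16

    # 4. co-assembly of the normalised reads
    megahit -1 norm_R1.fq.gz -2 norm_R2.fq.gz -o megahit_out -t 16

    # 5. map the raw (untrimmed) reads back to the assembly for coverage
    bbmap.sh ref=megahit_out/final.contigs.fa in=sample_R1.fq.gz in2=sample_R2.fq.gz out=mapped.sam covstats=covstats.txt threads=16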


    I’m currently at step 2 and finding that only ~30% of the reads can be merged, so I’m thinking I should just proceed with the unmerged PE reads. Where I’m really struggling is deciding whether or not co-assembly is a viable option in this case, given the size of the dataset. I’m finding that bbnorm takes a very long time to run even on a single file, nearly 24 hours with 16 threads and 400 GB of RAM, using the following settings:
    bbnorm.sh in=infile.fastq.gz out=normalised.fastq.gz hist=hist.txt prefilter=t mindepth=5 target=100 threads=16

    loglog.sh tells me that there are ~15 billion unique kmers per sample for the high-depth samples, and ~2.5 billion for the low-depth samples.
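
    For context, those numbers came from something along these lines (file name is a placeholder, and I may be misremembering the exact flags):

    # approximate unique kmer count per sample at the same k used for normalisation/assembly
    loglog.sh in=trim_sampleN.fq.gz k=31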

    So my main question is: would it make more sense for me to assemble the samples individually, concatenate the resulting contigs and deduplicate them using dedupe.sh or something similar? My major concern is that I’ll attempt to co-assemble all the reads, only to have it crash out after taking up several days on our cluster.
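
    If I go that route, I’m picturing something like the below, i.e. per-sample assemblies pooled and then deduplicated. Again, file names are placeholders and the flags are from memory, so treat it as a sketch:

    # assemble each sample on its own
    megahit -1 sampleN_R1.fq.gz -2 sampleN_R2.fq.gz -o sampleN_megahit -t 16

    # pool the per-sample assemblies and collapse duplicate / contained contigs
    cat sample*_megahit/final.contigs.fa > all_contigs.fa
    dedupe.sh in=all_contigs.fa out=contigs_dedupe.fa minidentity=99 threads=16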

    Does anyone have any other time-saving tips or advice for processing and assembling large datasets?

    I've never worked on a dataset this large before, so apologies if I'm missing something obvious here!

    Many thanks
