Processing a very large soil dataset

    I’ve been tasked with assembling a very large soil metagenomic dataset, so large that I’m having second thoughts about my regular processing pipeline, and was hoping some of you fine folk might have some advice:

    I have:

15 high-depth samples, totalling around 4.5 billion PE reads at 2x150

45 lower-depth samples, ~725 million PE reads at 2x150

All were sequenced on a HiSeq 2500.

I should also mention that the end goal here is to assemble as many near-complete genomes as possible, for mapping of metatranscriptomic reads from the same samples.

Ordinarily my pipeline would look something like this (rough example commands below):
1. adapter and quality trim to Q ≥ 10 using bbduk
2. merge reads with bbmerge; if the merge rate is low, proceed with PE reads only
3. concatenate files and normalise to a target coverage of 100 with bbnorm, removing low-depth kmers (≤ 5)
4. co-assemble all samples using megahit and/or spades
5. map raw reads to the assembly with bbmap
6. downstream analysis, annotation/binning etc.
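
For context, the trim/merge/map steps are roughly along these lines (file names are placeholders, and the exact parameters are just what I’d normally reach for rather than anything carefully tuned):

# 1) adapter and quality trimming
bbduk.sh in1=sample_R1.fastq.gz in2=sample_R2.fastq.gz out1=trimmed_R1.fastq.gz out2=trimmed_R2.fastq.gz ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 qtrim=rl trimq=10 minlen=50 threads=16
# 2) read merging, keeping the unmerged pairs separately
bbmerge.sh in1=trimmed_R1.fastq.gz in2=trimmed_R2.fastq.gz out=merged.fastq.gz outu1=unmerged_R1.fastq.gz outu2=unmerged_R2.fastq.gz ihist=insert_hist.txt
# 5) mapping the raw reads back to the final assembly
bbmap.sh ref=final_assembly.fa in1=sample_R1.fastq.gz in2=sample_R2.fastq.gz out=mapped.sam covstats=covstats.txt threads=16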


I’m currently at step 2, and finding that only ~30% of the reads can be merged, so I’m thinking I should just proceed with the unmerged PE reads. Where I’m really struggling is deciding whether or not co-assembly is a viable option in this case, given the size of the dataset. I’m finding that bbnorm is taking a very long time to run even on a single file, nearly 24 hours with 16 threads and 400 GB RAM, using the following settings:
    bbnorm.sh in=infile.fastq.gz out=normalised.fastq.gz hist=hist.txt prefilter=t mindepth=5 target=100 threads=16

Loglog.sh tells me that there are ~15 billion unique kmers per sample for the high-depth samples, and ~2.5 billion for the low-depth samples.
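
(For reference, I’m running it as something like the line below; the input name is a placeholder and I’ve just left k at 31:

loglog.sh in=trimmed_sampleN.fastq.gz k=31)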

So my main question is: would it make more sense for me to assemble the files individually, concatenate the results, and deduplicate using dedupe.sh or something similar (rough sketch below)? My major concern is that I’ll attempt to co-assemble all the reads only to have it crash after taking up several days on our cluster.
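
To make that alternative concrete, I’m imagining something along these lines per sample, then pooling the contigs (file names are placeholders and the parameters are just a first guess):

# assemble each sample on its own
megahit -1 sampleN_R1.fastq.gz -2 sampleN_R2.fastq.gz -o sampleN_assembly --presets meta-large -t 16
# pool the per-sample contigs and collapse duplicates
cat sample*_assembly/final.contigs.fa > all_contigs.fa
dedupe.sh in=all_contigs.fa out=contigs_deduped.fa threads=16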

Does anyone have any other time-saving tips or advice for processing and assembly of large datasets?

I've never worked on a dataset this large before, so apologies if I'm missing something obvious here!

    Many thanks
