  • Advice on quality-filtering shotgun metagenomic sequences from environmental samples

    Hello all!

    I am analyzing Illumina HiSeq 4000-generated paired-end shotgun metagenomic sequences obtained from environmental samples. I am new to shotgun metagenomic data, but I have experience analyzing 16S data.

    The reads are 150 nt in length, and the majority of the fragment sizes range from 280-700 bp. A few samples have fragment sizes ranging from 80-600 bp.

    I am using the illumina-utils package to quality-filter reads before de novo assembly, via the iu-filter-quality-minoche script (see here for more info: https://github.com/merenlab/illumina-utils).

    So far, approximately 68% of both R1 and R2 pass the QC parameters while 32% fail (94% of failures are due to R2).

    Here are my questions:
    1. Is this failure rate, driven mostly by read 2, normal?
    2. Should I quality-filter the reads prior to merging (if only about 20% of them can be merged)?
    3. Can I use both merged reads and unmerged R1 and R2 for de novo assembly using Megahit?

    Any guidance would be appreciated. Thanks for the help!

  • #2
    Originally posted by lwebs
    I am using the illumina-utils program to quality filter reads before de-novo assembly with the iu-filter-quality-minoche flag (see here for more info: https://github.com/merenlab/illumina-utils).

    So far, approximately 68% of both R1 and R2 pass the QC parameters while 32% fail (94% of failures due to R2).

    Here are my questions: Is this error rate and magnitude for read 2 normal?
    That's extremely high. Either you had a failed sequencing run, or your threshold is much too strict. It would be useful to post a quality-score boxplot, though. Either way, quality-trimming is generally better than filtering, as it both lets you retain more useful data and lets you remove more bad data.

    Consulting your link:

    C33: less than 2/3 of bases were Q30 or higher in the first half of the read following the B-tail trimming
    That sounds like too aggressive a threshold for an optimal metagenome assembly; it will result in low genome recovery and, likely, higher fragmentation (though I encourage you to verify this yourself). I'd suggest something more like Q10 trimming of the right end (which you can do with the BBDuk flags qtrim=r trimq=10), but the exact value depends on the dataset. Also, since adapter-trimming is universally positive while quality-trimming is only conditionally positive, I encourage you to adapter-trim the data before doing anything else.
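    To illustrate the order of operations, here is a sketch of the two-pass BBDuk approach described above. File names are placeholders; adapters.fa ships in the BBMap resources directory.

    Code:
    # Pass 1: adapter-trim from the right using k-mer matching against adapters.fa;
    # tpe/tbo additionally use pair overlap to catch short-insert adapters
    bbduk.sh in1=R1.fastq in2=R2.fastq out1=atrim_R1.fq out2=atrim_R2.fq \
        ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 tpe tbo

    # Pass 2: quality-trim the right end to Q10, as suggested above
    bbduk.sh in1=atrim_R1.fq in2=atrim_R2.fq out1=clean_R1.fq out2=clean_R2.fq \
        qtrim=r trimq=10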

    Should I quality filter the reads prior to merging some of the reads (if only about 20% can be merged)?
    I recommend trimming rather than filtering, but I don't recommend either prior to merging. BBMerge, incidentally, can do iterative quality-trimming only for reads that fail to merge without trimming, which improves the merge rate. Blanket quality-trimming all reads prior to merging can increase false-positive merges and reduce the merge rate due to fewer overlapping pairs.

    Also, BBMerge can merge non-overlapping reads, if you have high enough coverage; this is useful in this kind of scenario where only 20% of the reads overlap due to a large average insert size.
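    As a sketch of that merging strategy (the flag names below are from my reading of the BBMerge help text, so verify against bbmerge.sh --help: qtrim2 quality-trims only pairs that fail to merge untrimmed, while ecct/extend2/k enable error correction and k-mer extension of non-overlapping pairs):

    Code:
    # Merge pairs; quality-trim only reads that initially fail to merge (qtrim2),
    # and extend non-overlapping pairs using k-mers from the whole dataset
    bbmerge-auto.sh in1=clean_R1.fq in2=clean_R2.fq \
        out=merged.fq outu1=unmerged_R1.fq outu2=unmerged_R2.fq \
        qtrim2=r trimq=10,15,20 \
        ecct extend2=50 k=62 rem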

    Can I use both merged reads and unmerged R1 and R2 for de novo assembly using Megahit?
    You should always use both merged and unmerged reads for assembly. But in my testing, while merging improves metagenomic assemblies from SPAdes and Ray, it does not improve them for Megahit, so I don't recommend it as a preprocessing step for Megahit.
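    For reference, if you do go the merging route with SPAdes, both streams can be supplied in one command. This is a sketch: it assumes metaSPAdes accepts merged reads via the --merged option, and the file names are placeholders.

    Code:
    # Assemble using merged reads plus the pairs that did not merge
    spades.py --meta \
        --merged merged.fq \
        -1 unmerged_R1.fq -2 unmerged_R2.fq \
        -o metaspades_out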



    • #3
      Thank you for the advice, Brian. I am trying out BBTools (BBDuk and BBMerge).

      I just got BBDuk to run, but now I can't find the output files on my system... do I need to have existing directories to accept these files?

      Below is the command I just ran:

      Code:
      bbduk.sh in1=1_ATGAGGCCAC_L007_R1_001.fastq in2=1_ATGAGGCCAC_L007_R2_001.fastq out1=1_cleanR1.fq out2=1_cleanR2.fq ref=/data/laura/Extracted_Metagenomes/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 tpe tbo



      • #4
        The result files should have gone to the directory you ran the command from, unless there was an error (e.g., you don't have write permission to the directory the original data is in).
        Last edited by GenoMax; 05-03-2017, 09:59 AM.



        • #5
          The output files should be in your working directory, the same directory as the input files. What do you get when you run "ls *.f*"?



          • #6
            Thanks! Found them!



            • #7
              I am also looking for programs/scripts that would let me combine both the merged and orphaned PE reads into one file to use for assembly via Megahit. Any suggestions?

              I tried to cat the files together, but Megahit rejected the file with the error 'number of paired-end files not match!'.



              • #8
                Don't cat paired and unpaired reads together. For Megahit, supply the unpaired (merged/singleton) reads separately with the -r flag, like this:

                Code:
                megahit --12 paired.fq -r singletons.fq

                Note that --12 takes interleaved paired reads; if your pairs are in separate R1/R2 files, use -1 and -2 instead.



                • #9
                  Thank you! You have been a tremendous help!

