SEQanswers

Old 12-12-2019, 08:46 PM   #1
adamrork
Junior Member
 
Location: USA

Join Date: May 2018
Posts: 3
Trimming stringency w/ excessive data

I'm in the process of assembling a 500 Mb animal genome and have both Illumina and PacBio data, which I plan to combine in a hybrid assembly. When I assemble de novo transcriptomes or genomes, I generally try not to be too "aggressive" with the quality-trimming parameters for raw Illumina reads, running something along the lines of:

Code:
bbduk.sh in=file1.fq in2=file2.fq out=trimmed.fq qtrim=rl trimq=10  # plus some adapter-trimming parameters
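For reference, the elided adapter-trimming options are often the standard ones from the BBDuk guide; a hedged sketch (adapters.fa is the adapter reference bundled with BBTools, and the k-mer settings are the guide's recommended defaults, not anything specific to this dataset):

```shell
# Quality trimming at Q10 plus k-mer adapter trimming.
# ktrim=r trims adapter sequence to the right of a match;
# tpe/tbo are the recommended paired-end safety options.
bbduk.sh in=file1.fq in2=file2.fq out1=trimmed_1.fq out2=trimmed_2.fq \
    ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 tpe tbo \
    qtrim=rl trimq=10
```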
However, we sequenced our Illumina libraries very deeply: roughly 320x coverage untrimmed, and ~270x after trimming with the parameters above (plus adapter trimming). That is still about twice the coverage I would ever normally use for genome assembly, so right now I subsample the trimmed reads to about 100x for the hybrid assembly with my PacBio data. Given all this excess data, I was wondering what folks think about making the quality-trimming parameters more stringent, so that the surviving reads are higher quality on average. For example,

Code:
bbduk.sh in=file1.fq in2=file2.fq out=trimmed.fq qtrim=rl trimq=15
This brings me down to a little over 200x coverage, which I would still subsample to 100x worth of reads using reformat.sh. I know this isn't generally recommended, but I've never been sure whether that's because of the lost coverage (which wouldn't be an issue here) or because of something inherent to how assemblers handle a higher-quality read set. So far my assembly results using the "qtrim=rl trimq=10" parameters look reasonable; I'm mostly just curious.
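For concreteness, the subsampling step above can target total bases rather than a read count; a sketch, assuming a 500 Mb genome (the file names are placeholders):

```shell
# 100x coverage of a 500,000,000 bp genome = 5e10 bases.
# samplebasestarget subsamples reads until roughly that many bases remain.
reformat.sh in=trimmed.fq out=subsampled_100x.fq samplebasestarget=50000000000
```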

Last edited by adamrork; 12-12-2019 at 08:49 PM.
Old 12-13-2019, 03:58 AM   #2
GenoMax
Senior Member
 
Location: East Coast USA

Join Date: Feb 2008
Posts: 7,016

Rather than filtering, you should consider normalizing your data. BBMap has another program for this, called bbnorm.sh. You can find a guide here.
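A minimal bbnorm.sh invocation might look like the following (a sketch based on the BBNorm guide; the target depth and low-depth floor are illustrative, not prescriptive):

```shell
# Normalize to ~100x target depth; discard reads whose k-mer depth is
# below 5, since those are likely dominated by sequencing errors.
bbnorm.sh in=trimmed_1.fq in2=trimmed_2.fq out=normalized_1.fq out2=normalized_2.fq \
    target=100 min=5
```

Unlike uniform subsampling, normalization preferentially discards reads from over-represented regions, so coverage becomes more even across the genome.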
Old 12-13-2019, 10:46 AM   #3
adamrork
Junior Member
 
Location: USA

Join Date: May 2018
Posts: 3

Quote:
Originally Posted by GenoMax View Post
Rather than filtering, you should consider normalizing your data. BBMap has another program for this, called bbnorm.sh. You can find a guide here.
Ah, great point! I'll do that. Thank you!
Tags
bbduk, genome assembly, illumina reads, trimming
