SEQanswers

Old 09-23-2014, 10:55 PM   #1
fahmida
Member
 
Location: Australia

Join Date: Aug 2010
Posts: 54
Default Multiple read QC steps (trimming, filtering etc) in one go ... what's the best way?

Hi,

I know there are many versatile tools (BBDuk, Trimmomatic etc., to name a few) that can trim low-quality bases, adapters and so on. I wonder what would be the best way to do the following with a single command or pipeline:
1) Adapter/Quality Trimming and Filtering
2) removing reads with greater than 5% N's
3) removing reads where 20% or more of the calls were considered low quality bases
4) removing duplicated reads
And if that's not asking for too much already, perhaps :-)
5) error correcting reads as well!

Thanks.
Old 09-24-2014, 12:40 AM   #2
Brian Bushnell
Super Moderator
 
Location: Walnut Creek, CA

Join Date: Jan 2014
Posts: 2,707
Default

That's a tall order, considering some of it is per-read and some is dependent on the entirety of your data. There is no tool of which I am aware that can do it all in a single command.

1) BBDuk ("qtrim + trimq" and "ktrim=r + ref" flags). It comes with Truseq and Nextera adapter files.
2) BBDuk ("maxns" flag)
3) BBDuk ("maq" flag, which stands for 'min average quality'). If you have a reference, it's also possible to screen by %ID using BBMap. If you want to rely on the sequencer's accuracy estimates, BBDuk's "maq" filters by overall expected error rate: "maq=10" (phred-scaled) will eliminate reads in which at least 10% of the bases are expected to be incorrect. For 20%, the flag would be "maq=7".
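To see where "maq=7" comes from: a phred-scaled threshold for an error fraction p is -10*log10(p), which you can check from any shell (this is plain arithmetic, not a BBDuk command):

```shell
# phred score corresponding to a 20% expected error rate: -10*log10(0.20)
awk 'BEGIN { printf "%.1f\n", -10 * log(0.20) / log(10) }'   # prints 7.0
```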

#1-3 can be done in one command by BBDuk. The rest cannot.
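As a concrete sketch of steps 1-3 in one command (file names are placeholders, "adapters.fa" stands in for the Truseq/Nextera reference files shipped with BBDuk, and the k-mer settings shown are typical values, not prescriptions):

```shell
# Adapter-trim the right end, quality-trim both ends to Q10,
# drop reads with more than 5 Ns (maxns counts absolute Ns, so
# scale it to your read length for a percentage cutoff), and drop
# reads whose average quality implies >20% expected errors (maq=7).
bbduk.sh in=reads.fq out=clean.fq ref=adapters.fa \
    ktrim=r k=23 mink=11 hdist=1 \
    qtrim=rl trimq=10 maxns=5 maq=7
```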

4) It's important to note whether you have a reference. This can be done via mapping with various tools, or via matching with tools like Dedupe (which allows inexact matches but is usually less sensitive than mapping to a reference). Dedupe takes pairing into account, but it uses a substantial amount of memory for large libraries. Mapping-based tools require a reference, but less memory.
5) Error-correction requires consensus, and is much slower and more subjective than the other operations. There are various tools for this - I recommend BBNorm, with a command like "ecc.sh in=reads.fq out=corrected.fq". But there are other tools, such as Musket. Compared to other tool categories, I would be most worried about error-correction, as it is more subjective and has a greater chance of biasing your results. I do not recommend it except where necessary (such as when you have a huge amount of data, or very high substitution-type error rate, or highly variable coverage, as in amplified single-cell data). BBNorm does not use more memory or go slower as a result of more data.
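For steps 4 and 5, a minimal sketch with the BBTools wrappers (input/output names are placeholders):

```shell
# Step 4: remove duplicate reads by matching; memory-hungry for large libraries
dedupe.sh in=reads.fq out=deduped.fq
# Step 5: k-mer-based error correction via BBNorm
ecc.sh in=deduped.fq out=corrected.fq
```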

Some of these functions are also possible in Trimmomatic and Cutadapt. Deduplication, if you have a reference, can also be done by samtools in conjunction with any pair-aware mapping program, using far less memory than Dedupe, though much more time.
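A mapping-based deduplication sketch, assuming you have a reference and a pair-aware mapper such as bwa installed (file names are placeholders; `samtools rmdup` was the standard duplicate remover at the time, later superseded by `samtools markdup`):

```shell
bwa index ref.fa                              # one-time reference indexing
bwa mem ref.fa r1.fq r2.fq \
    | samtools sort -o aln.sorted.bam -       # pair-aware mapping, coordinate sort
samtools rmdup aln.sorted.bam aln.dedup.bam   # drop PCR duplicates
```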

Last edited by Brian Bushnell; 09-24-2014 at 12:59 AM.
Old 09-24-2014, 01:28 AM   #3
Jegar
Junior Member
 
Location: Cambridge

Join Date: Aug 2014
Posts: 6
Default

I don't have a lot of experience with the software mentioned, but I want to put in my two cents on error correction and quality filtering generally.
These two things are usually considered separate steps in the bioinformatic workflow, but in my mind they need to be considered together: it is difficult to make an accurate error-correction call if you've already trimmed off your low-quality base calls.

SAMtools will probably solve a lot of your problems when used in conjunction with more specialised modules, depending on whether you have a reference, etc.
Old 09-24-2014, 03:22 PM   #4
fahmida
Member
 
Location: Australia

Join Date: Aug 2010
Posts: 54
Default

@Brian: I really appreciate your comprehensive reply. I am involved in de novo assembly of moderately sized plant genomes without any close reference, dealing with massive volumes of data (multiple PE and MP libraries) and going through the entire set of QC steps mentioned above each time. Removing duplicates doesn't seem to have much impact on the assembly outcome, so I am thinking of skipping it for now. As you suggested, 1, 2 and 3 in the first round and 5 in the second seems the way to go.
Old 09-24-2014, 03:27 PM   #5
Brian Bushnell
Super Moderator
 
Location: Walnut Creek, CA

Join Date: Jan 2014
Posts: 2,707
Default

Agreed, that sounds like a good workflow. As for duplicate removal, it's not really relevant unless your data has been PCR-amplified; and it's more useful for re-sequencing/variation-calling than de-novo assembly.