SEQanswers

SEQanswers (http://seqanswers.com/forums/index.php)
-   Bioinformatics (http://seqanswers.com/forums/forumdisplay.php?f=18)
-   -   Multiple read QC steps (trimming, filtering etc) in one go ... what's the best way? (http://seqanswers.com/forums/showthread.php?t=46855)

fahmida 09-23-2014 10:55 PM

Multiple read QC steps (trimming, filtering etc) in one go ... what's the best way?
 
Hi,

I know there are many versatile tools (BBDuk, Trimmomatic etc., to name a few) that can trim low-quality bases, adapters etc. I wonder what would be the best way to do the following with a single command or pipeline:
1) Adapter/Quality Trimming and Filtering
2) removing reads with greater than 5% N's
3) removing reads where 20% or more of the calls were considered low quality bases
4) removing duplicated reads
And if I'm not already asking for too much, perhaps :-)
5) error correcting reads as well!

Thanks.

Brian Bushnell 09-24-2014 12:40 AM

That's a tall order, considering some of it is per-read and some is dependent on the entirety of your data. There is no tool of which I am aware that can do it all in a single command.

1) BBDuk ("qtrim + trimq" and "ktrim=r + ref" flags). It comes with Truseq and Nextera adapter files.
2) BBDuk ("maxns" flag)
3) BBDuk ("maq" flag - that stands for 'min average quality'). It's also possible to screen by %ID using BBMap, though, if you have a reference. If you want to rely on the sequencer's accuracy estimation, BBDuk's "maq" filters by overall expected error rate, so "maq=10" (phred-scaled) will eliminate reads in which at least 10% of the bases are expected to be incorrect. For 20%, the flag would be "maq=7".

#1-3 can be done in one command by BBDuk. The rest cannot.
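For reference, steps 1-3 might be combined into a single BBDuk command along these lines. The file names and thresholds here are illustrative, and note that "maxns" takes an absolute count of Ns rather than a percentage, so a value of 5 approximates the 5% cutoff only for ~100 bp reads:

```shell
# Hedged sketch: one BBDuk pass combining
#  - adapter trimming from the right (ktrim=r + ref, with kmer settings),
#  - quality trimming of both ends at Q10 (qtrim=rl trimq=10),
#  - N filtering (maxns=5, an absolute count, ~5% of a 100 bp read),
#  - expected-error filtering (maq=7, roughly 20% expected errors).
bbduk.sh in=reads.fq out=clean.fq ref=adapters.fa \
    ktrim=r k=23 mink=11 hdist=1 \
    qtrim=rl trimq=10 \
    maxns=5 maq=7
```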

4) This depends on whether you have a reference. It can be done via mapping with various tools, or via matching with tools like Dedupe (which allows inexact matches but is usually less sensitive than mapping to a reference). Dedupe takes pairing into account, but it also uses a substantial amount of memory for large libraries. Mapping-based tools require a reference, but less memory.
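As a reference-free sketch, a minimal Dedupe invocation might look like the following (file names are illustrative; consult the Dedupe documentation for the flags controlling inexact matching):

```shell
# Hedged sketch: reference-free duplicate removal with Dedupe.
# Input/output names are placeholders; by default Dedupe removes
# exact duplicates and containments.
dedupe.sh in=reads.fq out=deduped.fq
```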
5) Error-correction requires consensus, and is much slower and more subjective than the other operations. There are various tools for this - I recommend BBNorm, with a command like "ecc.sh in=reads.fq out=corrected.fq". But there are other tools, such as Musket. Compared to the other tool categories, I would be most wary of error-correction, as it is more subjective and has a greater chance of biasing your results. I do not recommend it except where necessary (such as when you have a huge amount of data, or a very high substitution-type error rate, or highly variable coverage, as in amplified single-cell data). BBNorm does not use more memory or run slower with more data.

Some of these functions are also possible in Trimmomatic and Cutadapt. Deduplication, if you have a reference, can also be done by samtools in conjunction with any pair-aware mapping program, using far less memory than Dedupe, though much more time.
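The mapping-plus-samtools route mentioned above might be sketched as follows, assuming a reference and the samtools syntax of that era (file names are placeholders; "rmdup" was later superseded by "markdup" in newer samtools releases):

```shell
# Hedged sketch: reference-based duplicate removal via mapping + samtools.
# Map reads to the reference with any pair-aware mapper (BBMap shown here).
bbmap.sh in=reads.fq ref=genome.fa out=mapped.sam
# Convert to BAM and coordinate-sort (old-style sort takes an output prefix).
samtools view -bS mapped.sam > mapped.bam
samtools sort mapped.bam mapped_sorted
# Remove PCR duplicates from the sorted BAM.
samtools rmdup mapped_sorted.bam deduped.bam
```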

Jegar 09-24-2014 01:28 AM

I don't have a lot of experience with the software mentioned, but I want to put in my two cents on error correction and quality filtering generally.
These two things are usually considered separate steps in the bioinformatic workflow, but in my mind they need to be considered together: it is difficult to make an accurate call during error-correction if you've already trimmed off your low-quality base calls.

SAMtools will probably solve a lot of your problems when used in conjunction with more specialised modules, depending on whether you have a reference, etc.

fahmida 09-24-2014 03:22 PM

@Brian: I really appreciate your comprehensive reply. I am working on de novo assembly of moderate-sized plant genomes without any close reference, dealing with massive volumes of data (multiple PE and MP libraries) and going through the entire set of QC steps mentioned above each time. Removing duplicates doesn't seem to have much impact on the assembly outcome, so I am thinking of skipping it for now. So, as you suggested, doing 1, 2 and 3 in the first round and 5 in the second seems the way to go.

Brian Bushnell 09-24-2014 03:27 PM

Agreed, that sounds like a good workflow. As for duplicate removal, it's not really relevant unless your data has been PCR-amplified; and it's more useful for re-sequencing/variation-calling than de-novo assembly.

