#1
Brian Bushnell, Super Moderator
Location: Walnut Creek, CA | Join Date: Jan 2014 | Posts: 2,707
I'd like to introduce a new BBTool, KmerCompressor. It takes a dataset, reduces it to its set of constituent kmers, and prints an optimally condensed representation of them in fasta format, in which each kmer occurs exactly once. This is similar to an assembler, but the addition of kmer count cutoffs lets it perform arbitrary set operations on kmers. That, in turn, allows advanced filtering of raw reads to capture specific features such as ribosomal, mitochondrial, or chloroplast sequence, or to filter by taxonomy.
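To make "each kmer occurs exactly once" concrete, here is a toy Python sketch of one simple way such a condensed representation can be built (a greedy overlap merge; an illustration only, not the algorithm KmerCompressor actually uses, and the input kmers are made up):

# Illustration only: condense a kmer set into fasta records in which every
# kmer of the set appears exactly once, by greedily extending seeds through
# (k-1)-base overlaps and consuming each kmer as it is placed.
def condense(kmer_set, k):
    remaining = set(kmer_set)
    contigs = []
    while remaining:
        contig = remaining.pop()
        extended = True
        while extended:
            extended = False
            for base in "ACGT":
                candidate = contig[-(k - 1):] + base
                if candidate in remaining:
                    remaining.remove(candidate)
                    contig += base
                    extended = True
                    break
        contigs.append(contig)
    return contigs

kmers = {"ACGTA", "CGTAC", "GTACG", "TTTTT"}  # made-up 5-mers
for i, seq in enumerate(condense(kmers, k=5), 1):
    print(f">contig_{i}\n{seq}")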
The basic usage is like this:

kcompress.sh in=reads.fq out=set.fa

To get just the 31-mers that appear between 100 and 150 times in a dataset:

kcompress.sh in=reads.fq out=set.fa min=100 max=150 k=31

To use it for a set union (all the kmers in either of two files):

kcompress.sh in=ecoli.fa,salmonella.fa out=union.fa

With those basic operations, various set operations become possible. For example:

kcompress.sh in=fungal_genome.fa out=set_g.fa
kcompress.sh in=fungal_mitochondria.fa out=set_m.fa

Each of those sets has each kmer represented exactly once. Therefore, you can perform an intersection like this:

kcompress.sh in=set_g.fa,set_m.fa out=intersection.fa min=2

Or a subtraction like this:

kcompress.sh in=set_m.fa,intersection.fa out=m_minus_g.fa max=1

Then m_minus_g.fa contains all the kmers that are specific to the mitochondria in that organism, and could be used for filtering reads in an iterative assembly process.

I've recently been using KmerCompressor to create a set of ribosomal kmers for rapid metatranscriptome rRNA filtering with BBDuk, by reducing a very large ribosomal (16S/18S) database to just the set of kmers that occur often (and are thus both correct and conserved). This is useful for avoiding false positives, and it reduces load time and memory usage compared to working with the entire database. For example:

dedupe.sh in=multiple_ribo_databases.fa.gz out=nodupes.fa.gz
kcompress.sh in=nodupes.fa.gz out=compressed.fa.gz k=31 min=5

...will result in a much smaller file, with similar (tunable) sensitivity and better specificity compared to the original. Subsequently, I run:

bbduk.sh in=metatranscriptome.fq.gz outu=nonribo.fq.gz outm=ribo.fq.gz ref=compressed.fa.gz k=31

...to separate the reads.

P.S. Here is a link to a file I created with KmerCompressor: ribokmers.fa.gz

This 9MB file contains commonly-occurring ribosomal kmers from Silva. Used in conjunction with BBDuk, like this:

bbduk.sh in=reads.fq outm=ribo.fq outu=nonribo.fq k=31 ref=ribokmers.fa.gz

...it achieves roughly 99.94% sensitivity on synthetic 1x150bp reads generated from the full Silva database (180MB compressed), 99.98% sensitivity with hdist=1, and 99.994% sensitivity at k=25 hdist=1.

Last edited by Brian Bushnell; 10-06-2015 at 05:49 PM.
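For intuition about the min=2 / max=1 trick above, here is a minimal Python sketch showing why those cutoffs act as intersection and subtraction when each input lists every kmer exactly once (a conceptual illustration, not the Java implementation inside BBTools; the sequences are made up):

from collections import Counter

K = 5  # toy kmer length (the examples above use k=31)

def kmer_set(seq, k=K):
    # Return the set of kmers in a sequence (reverse complements ignored here).
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Condensed inputs: each kmer appears exactly once per set.
set_g = kmer_set("ACGTACGTACGT")  # stands in for the fungal_genome kmers
set_m = kmer_set("CGTACGTTTTTT")  # stands in for the fungal_mitochondria kmers

# Combining both sets and counting: shared kmers count 2, unique kmers count 1.
combined = Counter()
for s in (set_g, set_m):
    combined.update(s)
intersection = {km for km, c in combined.items() if c >= 2}  # min=2

# Subtraction: combine set_m with the intersection; anything seen twice is dropped.
combined2 = Counter()
for s in (set_m, intersection):
    combined2.update(s)
m_minus_g = {km for km, c in combined2.items() if c <= 1}  # max=1

assert m_minus_g == set_m - set_g  # mitochondria-specific kmers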
#2
Senior Member
Location: East Coast USA | Join Date: Feb 2008 | Posts: 7,087
What is the upper limit on the k-mer size one can specify?
#3
Brian Bushnell, Super Moderator
Location: Walnut Creek, CA | Join Date: Jan 2014 | Posts: 2,707
It's currently capped at 31, though I could make an unlimited-kmer-length version in a few hours. That would probably be worth doing, if I get some free time.
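Side note (my inference, not something stated above): 31 is the largest k for which a kmer fits in a single 64-bit word at 2 bits per base, which is the usual reason kmer tools cap k there; going beyond it generally means multi-word or string kmers, hence the extra work mentioned. A minimal Python sketch of that packing:

# Why k <= 31 is a natural cap: 2 bits per base means a 31-mer needs 62 bits,
# so it fits in one 64-bit integer. Illustration only, not BBTools code.
BASE_BITS = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_kmer(kmer):
    # Pack a kmer (length <= 31) into a single 64-bit value, 2 bits per base.
    assert len(kmer) <= 31, "a longer kmer no longer fits in one 64-bit word"
    value = 0
    for base in kmer:
        value = (value << 2) | BASE_BITS[base]
    return value

print(hex(pack_kmer("ACGT" * 7 + "ACG")))  # a 31-mer packed into one integer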
#4
Senior Member
Location: US | Join Date: Dec 2010 | Posts: 453
Very neat! Time to start playing.
Tags: bbduk, bbmap, bbtools, filtering, kcompress, kmercompressor