Hi,
I am trying to figure out the best way to trim primers from MiSeq 16S amplicon data with BBDuk. I have already filtered PhiX and trimmed adapters while the fastq file was interleaved. Then I deinterleaved the file and trimmed primers. My basic approach is below. This command trims 515F from the forward reads, and I have a similar one for trimming 806R from the reverse reads.
bbduk.sh in=SingleFileTest/Deinterleaved_reads/forward_reads.fastq
out=SingleFileTest/BBDUKPrimerTrim/forward_reads_PrimerTrimmed.fastq
literal=GTGCCAGCMGCCGCGGTAA ktrim=l hdist=2 k=19
However, after running this, not all sequences in the file are trimmed. For example, I ran it on a fastq file containing data from a single sample: it started with 107179 forward reads, BBDuk clipped 515F from 104227 of them, and the resulting file still contains all 107179 reads, most but not all clipped. I managed to isolate some of the unclipped reads from the resulting fastq, and the primers on those reads tend to be missing a base in the middle. For example, instead of the primer being "GTGCCAGCMGCCGCGGTAA", one read had "GTGCCAGCGCCGCGGTAA" and another had "GTGCCGCAGCCGCGGTAA". Here they are aligned to make the issue clearer:
GTGCCAGCMGCCGCGGTAA
GTGCCAGC GCCGCGGTAA
GTGCC GCAGCCGCGGTAA
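To convince myself why these reads escape trimming, I wrote a small standalone Python sketch (my own illustration, not BBDuk code) comparing the two unclipped variants above to 515F. A single deleted base shifts the rest of the primer out of frame, so the ungapped (Hamming) mismatch count blows up even though the edit distance is only 1, which I assume is why hdist=2 cannot rescue these reads:

```python
# Illustration only: a one-base deletion in the primer defeats ungapped
# Hamming-distance matching but is a single edit under edit distance.

PRIMER = "GTGCCAGCMGCCGCGGTAA"  # 515F; M is the IUPAC code for A or C

def base_match(primer_base, read_base):
    """IUPAC-aware base comparison (only M is needed for 515F)."""
    iupac = {"M": "AC"}
    return primer_base == read_base or read_base in iupac.get(primer_base, primer_base)

def hamming(primer, read):
    """Mismatches under ungapped, position-by-position comparison."""
    return sum(not base_match(p, r) for p, r in zip(primer, read))

def edit_distance(primer, read):
    """Levenshtein distance: substitutions, insertions, and deletions."""
    prev_row = list(range(len(read) + 1))
    for i, p in enumerate(primer, 1):
        row = [i]
        for j, r in enumerate(read, 1):
            cost = 0 if base_match(p, r) else 1
            row.append(min(prev_row[j] + 1,          # deletion
                           row[j - 1] + 1,           # insertion
                           prev_row[j - 1] + cost))  # (mis)match
        prev_row = row
    return prev_row[-1]

for variant in ("GTGCCAGCGCCGCGGTAA", "GTGCCGCAGCCGCGGTAA"):
    print(variant,
          "hamming mismatches:", hamming(PRIMER, variant),
          "edit distance:", edit_distance(PRIMER, variant))
```

Both variants come out with an edit distance of 1 but far more than 2 Hamming mismatches against the full primer.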
I guess it is also possible that there are primers with extra bases that likewise escape trimming, although I have not found any of these yet. Does anyone have thoughts on how to make sure the clipping also catches these primers with errors? I did figure out a way to filter the reads with remaining primer errors out of the files from which all the correct primers were trimmed. However, that leaves the matching pair of deinterleaved files with different numbers of reads, which is obviously a problem when joining reads. Any help would be greatly appreciated. Also, is there an approach to primer trimming that could be done prior to deinterleaving that I am somehow spacing out on?
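For concreteness, here is the variant of the command I plan to test next. This is only a sketch, assuming BBDuk's edist option (edit distance) tolerates single-base insertions/deletions the way hdist tolerates substitutions; I have not verified this behavior on real data:

```shell
# Sketch only (untested assumption): swap hdist for edist so that
# single-base indels in the primer can still be matched, since Hamming
# distance only covers substitutions.
bbduk.sh in=SingleFileTest/Deinterleaved_reads/forward_reads.fastq \
    out=SingleFileTest/BBDUKPrimerTrim/forward_reads_PrimerTrimmed.fastq \
    literal=GTGCCAGCMGCCGCGGTAA ktrim=l k=19 edist=2
```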
Thanks for the help