Hey everyone! Sorry if this is a repeat thread; I looked around for a bit, but I'm new so it's definitely possible I missed an answer to my question.
I'm interested in Bowtie's algorithm for read alignment (found here: http://bowtie-bio.sourceforge.net/index.shtml). Specifically, does anyone know, mathematically, how changing query length affects Bowtie's runtime? Likewise, how is Bowtie's speed affected by increasing the number of permitted mismatches? I've read http://genomebiology.com/2009/10/3/R25, which is extremely helpful on these questions: there's a chart comparing Bowtie to other aligners across different read lengths, and I understand that allowing more mismatches introduces excessive backtracking, which can cause the search to be terminated. But I'm trying to get a better handle, mathematically, on what's going on here.
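To make my question concrete, here's my mental model of the core step as a toy sketch. This is my own code, not Bowtie's, and all the names in it are mine; it's just the textbook FM-index backward search that (as I understand it) Bowtie builds on. For exact matching, each query character costs one rank lookup, which is why I'd expect runtime roughly linear in query length; mismatches would then mean branching over alternative characters at each position, which is where I'm hazy on the math.

```python
# Toy FM-index backward search (my own sketch, NOT Bowtie's code).
# Exact matching does one rank update per query character,
# so it is linear in the query length.

def bwt(text):
    """Burrows-Wheeler transform via a naive suffix array."""
    text += "$"
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    return "".join(text[i - 1] for i in sa)

def preprocess(b):
    """Build C[c] (chars in b smaller than c) and occ[c][i]
    (occurrences of c in b[:i]) for backward search."""
    counts = {}
    for ch in b:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0
    for ch in sorted(counts):
        C[ch] = total
        total += counts[ch]
    occ = {ch: [0] * (len(b) + 1) for ch in counts}
    for i, ch in enumerate(b):
        for c in occ:
            occ[c][i + 1] = occ[c][i] + (1 if c == ch else 0)
    return C, occ

def backward_search(b, C, occ, query):
    """Count occurrences of query; one interval update per character."""
    lo, hi = 0, len(b)
    for ch in reversed(query):
        if ch not in C:
            return 0
        lo = C[ch] + occ[ch][lo]
        hi = C[ch] + occ[ch][hi]
        if lo >= hi:
            return 0
    return hi - lo

b = bwt("GATTACAGATTACA")
C, occ = preprocess(b)
print(backward_search(b, C, occ, "ATTA"))  # -> 2
```

With k permitted mismatches, I picture each of those loop iterations potentially forking into the other alphabet characters, so the search tree (before any of Bowtie's backtracking caps) could grow roughly like (query length x alphabet size)^k; is that the right way to think about it?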
Thanks!