01-06-2013, 10:24 PM   #1
gsgs (Senior Member; Location: Germany; joined Oct 2009; 140 posts)

comparing large sets of sequences

Suppose you have 2 large sequences, or sets of sequences, that you want to compare for matching entries. E.g. you sequenced some ancient bone and want to check for bacterial contamination.

For simplicity, assume you have 2 sets of 1000 nucleotide sequences of length 1000, 1 GB per set, that you want to compare against each other to find the best pairs of matching sequences or subsequences.
Sounds like a standard problem, doesn't it?

How is it done? What is the best, fastest method?

01-07-2013, 07:10 AM   #2
xied75 (Senior Member; Location: Oxford; joined Feb 2012; 129 posts)

1000 sequences of 1000 bp is 1 MB?

When you say match, do you mean exact string match?
When you say subsequence match, do you mean substring?
Or do you mean the best aligned pair (Smith-Waterman)?

01-07-2013, 07:41 AM   #3
gsgs (Senior Member; Location: Germany; joined Oct 2009; 140 posts)

Yes, 1000*1000 bp = 1 MB as a fasta file; sorry, I miscalculated.
So let's say 1,000,000 sequences of 1000 bp.

String match, or whatever is suitable to find genetic relatives.

Yes, I meant substring rather than subsequence.

Smith-Waterman would be too slow?!

01-07-2013, 08:06 AM   #4
xied75 (Senior Member; Location: Oxford; joined Feb 2012; 129 posts)

A full-string exact match is easy to test; how do you want to compare substrings, then?

01-07-2013, 08:20 AM   #5
gsgs (Senior Member; Location: Germany; joined Oct 2009; 140 posts)

OK, this came up in another thread (well, two threads) before, and I thought to myself that the methods being used are just inefficient and that there is a better way.

I build a binary table of the substrings of length 15 that occur in file 1 (well, length 16 or 17 if both files are 1 GB?), and then, for each newly read nucleotide of file 2 (and thus each new 15-substring), I look that substring up in the table. This is almost as fast as reading the two files from disk into memory.

But just checking for 15-substring matches is not enough; they are too short and give too many random matches. So I also check for 30-substrings each of whose sixteen 15-substrings are marked in the binary table.
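
Roughly, in Python, the scheme looks like this (a simplified sketch, assuming plain A/C/G/T input with no N's; my actual code differs):

Code:
K = 15                           # substring length
MASK = 4 ** K - 1                # keep only the low 2*K bits of the rolling code
BASE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

table = bytearray(4 ** K // 8)   # one bit per possible 15-mer: 134,217,728 bytes

def mark(seq):
    """Mark every 15-substring of seq (file 1) in the bit table."""
    code = 0
    for i, b in enumerate(seq):
        code = ((code << 2) | BASE[b]) & MASK   # rolling 2-bit encoding
        if i >= K - 1:
            table[code >> 3] |= 1 << (code & 7)

def scan(seq, span=30):
    """Count marked 15-substrings in seq (file 2), and 30-substrings
    all sixteen of whose 15-substrings are marked (a run of 16 hits)."""
    hits = span_hits = run = 0
    code = 0
    for i, b in enumerate(seq):
        code = ((code << 2) | BASE[b]) & MASK
        if i < K - 1:
            continue
        if table[code >> 3] & (1 << (code & 7)):
            hits += 1
            run += 1
            if run >= span - K + 1:             # 16 consecutive marked 15-mers
                span_hits += 1
        else:
            run = 0
    return hits, span_hits

mark() is one linear pass over file 1 and scan() one linear pass over file 2, so the total work is O(n) plus the random bit accesses into the 134 MB table.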

I found that this works very well in practice, and I was wondering what this method is called and where to find implementations or papers about it, but I couldn't find any. Instead I found lots of information about BLAST and other methods, which are apparently much slower and more complicated, with lots of effort put into them.

01-07-2013, 08:33 AM   #6
xied75 (Senior Member; Location: Oxford; joined Feb 2012; 129 posts)

What you are doing is a 'Hash Join' in RDBMS terms. It's like

Code:
select * from t1 inner join t2 on t1.column1 = t2.column1
The DB engine builds a hash table out of t2, then probes that hash with the rows of t1.
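
In Python terms, the same idea looks like this (toy data made up for illustration):

Code:
set_b = {"ACGTACGTACGTACG": "B_seq_1",
         "TTGACCTGAACGTTA": "B_seq_2"}          # build side: hash table of set B

set_a = ["ACGTACGTACGTACG", "GGGCCCGGGCCCGGG"]  # probe side: set A

# Probe phase: look each row of set A up in the hash of set B,
# which is exactly what the DB engine does for the inner join above.
common = [(seq, set_b[seq]) for seq in set_a if seq in set_b]
print(common)   # [('ACGTACGTACGTACG', 'B_seq_1')]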

So in the end you want a report saying the two sets have xxxxx rows in common, xxxx in set A only, and xxxx in set B only, and to draw a Venn diagram?

This is EXACT matching. If inexact matches are allowed, i.e. a defined number of mismatches, gap opens, etc., then this turns into the classic alignment problem.

01-07-2013, 08:46 AM   #7
gsgs (Senior Member; Location: Germany; joined Oct 2009; 140 posts)

> What you are doing is 'Hash Join' in RDBMS.

Thanks.

I report the number (and %) of matching 15-substrings and the number of matching 30-15-substrings (substrings of length 30 each of whose sixteen 15-substrings are marked in the table), with options to print the matching 30-15-substrings, the record number if it's a fasta file, and the position within the record.

You could allow for gaps, so that e.g. only 15 or 14 of the 16 substrings are required to be marked, or mark twice as many strings from file 1 (gap somewhere), or some such. But I don't feel that this would improve things a lot.

30 and 15 are variable, depending on file size and available memory / memory-cache size.



RDBMS:
http://en.wikipedia.org/wiki/Relatio...agement_system
http://en.wikipedia.org/wiki/Relational_model
http://en.wikipedia.org/wiki/Hash_join


01-07-2013, 09:06 AM   #8
xied75 (Senior Member; Location: Oxford; joined Feb 2012; 129 posts)

OK, you are using a moving window of size 15 to build the hash. That means you can't have a mismatch or gap within those 15 bases (they are the key used to look up the hash table); you can have gaps or mismatches between multiple hits, but not within any 15-base hit.

This will take too much memory once the set is large.

This is almost the scheme of the first-generation aligners, including ELAND, MAQ, and SOAP v1.

If you want a fast inexact solution, I'd put one set into a FASTA file and call bwa index on it, then put the other set into a FASTQ file and call bwa aln. (Many people do this.)
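
For example, driven from Python (file names are placeholders; the extra bwa samse step is just to turn the hits into readable SAM output):

Code:
import subprocess

subprocess.run(["bwa", "index", "setA.fasta"], check=True)   # index one set
with open("aln.sai", "wb") as sai:                           # align the other
    subprocess.run(["bwa", "aln", "setA.fasta", "setB.fastq"],
                   stdout=sai, check=True)
with open("aln.sam", "w") as sam:                            # convert to SAM
    subprocess.run(["bwa", "samse", "setA.fasta", "aln.sai", "setB.fastq"],
                   stdout=sam, check=True)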

01-08-2013, 12:22 AM   #9
gsgs (Senior Member; Location: Germany; joined Oct 2009; 140 posts)

If n is the number of nucleotides in the smaller of the two files being compared, then the memory requirement is ~4*n bits, or n/2 bytes. I tried it with the 15-substring table (4^15 bits = 134 MB) on human chromosomes of length > 200 MB, and it seemed to work well.
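
For reference, the table sizes for a few choices of the substring length k (one presence bit per possible k-mer; my own arithmetic):

Code:
for k in (15, 16, 17):
    bits = 4 ** k
    print(f"k={k}: 4^{k} bits = {bits // 8 / 1e6:.0f} MB")
# k=15: 4^15 bits = 134 MB
# k=16: 4^16 bits = 537 MB
# k=17: 4^17 bits = 2147 MB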

Fake hits are usually substrings with repeats, or with a high content of only one or two nucleotides, e.g. long runs of T. Now I need a database of such frequent "fake" hits so I can exclude them... does one exist?
It's no big problem if some real hits are excluded too: the sequences are long, and there will be other hits if there is real common ancestry.
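
Lacking such a database, a simple composition filter might already help; something like this (the 80% threshold is an arbitrary choice for illustration):

Code:
from collections import Counter

def low_complexity(kmer, frac=0.8):
    """True if the two most common bases make up >= frac of the k-mer."""
    top_two = Counter(kmer).most_common(2)
    return sum(count for _, count in top_two) >= frac * len(kmer)

print(low_complexity("TTTTTTTTTTTTTTA"))   # True: almost all T
print(low_complexity("ACGTGGCTAGCTAAC"))   # False: mixed composition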

Searching for the software mentioned, I found:
http://en.wikipedia.org/wiki/List_of...nment_software

So many programs...
I still don't understand why anyone would use a different method, at least as a first step to reduce the set of possible candidates. What can be faster than basically just the time required to load the data into memory, i.e. O(n)? Fast memory caching could become a problem with GB-sized sets, but I haven't seen that yet.

E.g. comparing against human chromosome 1 (30,15):

224999690 15-substrings were read from 1 sequence in file f:\hg18\chr01;
these gave 136840909 (= 60.82%) different markings in the table.

chimpanzee
217189828 15-substrings were read from file f:\chimp\chr01
192230629 (= 88.51%) of these were marked in the table
155119260 (= 71.42%) matching 30-15-substrings were found

gorilla
212549001 15-substrings were read from file f:\gorill\chr01
186188324 (= 87.60%) of these were marked in the table
143706656 (= 67.61%) matching 30-15-substrings were found

macaca mulatta
219576101 15-substrings were read from file f:\macmul\chr01
139018325 (= 63.31%) of these were marked in the table
54682566 (= 24.90%) matching 30-15-substrings were found

human chromosome 2 (unrelated)
237709794 15-substrings were read from file f:\hg18\chr02
123413391 (= 51.92%) of these were marked in the table
31633817 (= 13.31%) matching 30-15-substrings were found
(repetitions, unusual strings, etc.)

01-09-2013, 12:26 AM   #10
Jeremy (Senior Member; Location: Pathum Thani, Thailand; joined Nov 2009; 190 posts)

Surely BLAST or CD-HIT would be easier than coding your own algorithm?

01-09-2013, 12:35 AM   #11
gsgs (Senior Member; Location: Germany; joined Oct 2009; 140 posts)

Yes, but then why would they all be doing it the wrong way, despite so much research and so many papers?

01-10-2013, 04:28 AM   #12
bioBob (Member; Location: Virginia; joined Mar 2011; 72 posts)

If you are going to rely on long exact matches anyway, why not use BLAST and increase the word size? It runs pretty fast at, say, 19.
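
For example (file names are placeholders; -outfmt 6 gives one tabular line per hit):

Code:
import subprocess

# index one set as the BLAST database
subprocess.run(["makeblastdb", "-in", "setA.fasta", "-dbtype", "nucl"],
               check=True)
# search the other set against it with a larger word size
subprocess.run(["blastn", "-query", "setB.fasta", "-db", "setA.fasta",
                "-word_size", "19", "-outfmt", "6", "-out", "hits.tsv"],
               check=True)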