SEQanswers

SEQanswers (http://seqanswers.com/forums/index.php)
-   Bioinformatics (http://seqanswers.com/forums/forumdisplay.php?f=18)
-   -   Multi-threaded (faster) SAMtools (http://seqanswers.com/forums/showthread.php?t=18163)

nilshomer 03-04-2012 06:47 AM

Multi-threaded (faster) SAMtools
 
I have been working on speeding up reading and writing within SAMtools by creating a multi-threaded block-gzip reader and writer. I have an alpha version working. I would appreciate some feedback and testing; just don't use it on production systems yet. Thank you!

http://github.com/nh13/samtools/tree/pbgzip

NB: I would be happy to describe the implementation, and to collaborate on getting this into Picard too.
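The block-compression idea can be sketched in a few lines. This is a hypothetical Python illustration of the general approach (independent gzip members of at most 64 KiB, compressed in parallel and concatenated in input order), not Nils' actual C implementation:

```python
import gzip
import zlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 64 * 1024  # BGZF caps uncompressed blocks at 64 KiB

def compress_block(block: bytes) -> bytes:
    # Each block becomes an independent gzip member, so blocks can be
    # compressed in parallel and simply concatenated afterwards.
    co = zlib.compressobj(6, zlib.DEFLATED, 16 + zlib.MAX_WBITS)  # gzip wrapper
    return co.compress(block) + co.flush()

def pbgzip_compress(data: bytes, threads: int = 4) -> bytes:
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    # zlib releases the GIL during compression, so a thread pool gives real
    # parallelism; map() preserves input order, keeping the output stream
    # identical to what a single-threaded writer would produce.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return b"".join(pool.map(compress_block, blocks))

def pbgzip_decompress(data: bytes) -> bytes:
    # gzip.decompress handles a concatenation of gzip members.
    return gzip.decompress(data)
```

The key property is that the output does not depend on the number of threads, only on the block boundaries.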

Richard Finney 03-04-2012 08:03 AM

Any benchmarks?

nilshomer 03-04-2012 08:38 AM

Copied here from http://sourceforge.net/mailarchive/m...sg_id=28915492

I am working on benchmarking the samtools commands today, and will post back.

Quote:

A 4GB SAM file was used on a dual-hex-core (12 cores) computer. I
benchmarked compression then decompression, making sure the resulting files
were the same. Decompression seems to be limited by IO.

Name          Compression (s)   Decompression (s)
bgzip         485.64            39.93
pbgzip -n 1   481.57            40.02
pbgzip -n 2   240.85            41.03
pbgzip -n 4   122.05            41.79
pbgzip -n 8    63.17            41.17
pbgzip -n 12   43.12            41.65
pbgzip -n 16   39.59            41.48
pbgzip -n 20   37.03            42.41
pbgzip -n 24   34.90            47.24

nilshomer 03-04-2012 09:17 PM

Updated numbers (in seconds) for a few commands:

Command        samtools   psamtools
view BAM         29.45       19.20
view -b BAM     207.51       19.36
view -S SAM      44.89       44.43
view -Sb SAM    222.64       32.62
sort            206.32       25.17
mpileup        6574.20     7252.08
depth            17.64        7.47
index            11.96        1.93
flagstat         11.73        1.73
calmd -b        209.25       22.86
rmdup -s        154.88       22.08
reheader          0.76        0.74
cat               1.54        1.37

Richard Finney 03-05-2012 09:53 AM

Looks good!
One question: why is mpileup slower?

nilshomer 03-05-2012 12:11 PM

Working on it. I am doing this in my free time, so having one command perform worse isn't too bad so far.

krobison 03-05-2012 01:03 PM

Really cool!!

Do you have benchmarks for retrieving specific reads for a region? For mpileup of a specific region or a list of targets?

Any idea whether this will work with the Bio::DB::Sam Perl module (which must be linked against samtools)?

What are the prospects for merging this with the main samtools development?

nilshomer 03-05-2012 01:08 PM

Seeks are just as fast, so there is no speedup or slowdown on seeking, but there should be a speedup when reading from that point on, assuming the region contains at least a minimal number of reads (otherwise there is no work to be done). For mpileup, it does not process regions in parallel, if that is what you were implying.
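For context, BGZF seeks use virtual file offsets as defined in the SAM/BAM specification: the upper 48 bits locate a compressed block in the file, and the lower 16 bits give the position inside that block's uncompressed contents. A minimal sketch of the packing:

```python
def make_virtual_offset(coffset: int, uoffset: int) -> int:
    # A BGZF virtual offset packs the compressed-file offset of a block
    # (upper 48 bits) with the offset inside the uncompressed block
    # (lower 16 bits). A seek lands on a block boundary, inflates that
    # one block, and skips uoffset bytes -- independent of thread count.
    assert 0 <= uoffset < (1 << 16)
    return (coffset << 16) | uoffset

def split_virtual_offset(voffset: int):
    return voffset >> 16, voffset & 0xFFFF
```

Since a seek only ever touches one block, parallel decompression helps the sequential read that follows, not the seek itself.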

I posted to the samtools list without a response, so I have no hypothesis as to whether this will be included (of course it needs more testing first). It is generally difficult to get things included there. I have more hope for Picard.

Pysam and the Bio::DB::Sam Perl module should not notice any difference in the API, though there is no good mechanism yet for setting the number of threads to use (it autodetects the number of cores).

krobison 03-05-2012 01:12 PM

I also see the sort command now gives an option to pick an algorithm. What a blast from the past!

Any heuristics on what algorithm might perform better in what setting?

And why no bubble sort option :-)

nilshomer 03-05-2012 02:18 PM

Quote:

Originally Posted by krobison (Post 66830)
I also see the sort command now gives an option to pick an algorithm. What a blast from the past!

Any heuristics on what algorithm might perform better in what setting?

And why no bubble sort option :-)

I just used Heng's ksort.h library. I like introsort, but mergesort is the default in the original samtools.

I have also been toying with a multi-threaded sort, which sort of works in the new version, except I haven't taken the time to do a proper multi-way merge (one implementation requires calculating evenly spaced pivots). Maybe wait a few more weekends.
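One way to avoid the pivot calculation entirely is a heap-based k-way merge of independently sorted runs. A hypothetical Python sketch of that scheme (not the samtools implementation):

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def parallel_sort(records, threads=4):
    # Sort chunks independently (in samtools these would be the in-memory
    # chunks spilled to temporary BAM files), then k-way merge the sorted
    # runs with a heap -- no evenly spaced pivots needed.
    chunk = max(1, -(-len(records) // threads))  # ceiling division
    pieces = [records[i:i + chunk] for i in range(0, len(records), chunk)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        runs = list(pool.map(sorted, pieces))
    return list(heapq.merge(*runs))
```

The merge step here is single-threaded; the pivot approach trades that simplicity for a parallel merge.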

adaptivegenome 03-05-2012 04:45 PM

Quote:

Originally Posted by nilshomer (Post 66682)
I have been working on speeding up reading and writing within SAMtools by creating a multi-threaded block gzip reader and writer. I have an alpha version working. I would appreciate some feedback and testing, just don't use it for production systems yet. Thank-you!

http://github.com/nh13/samtools/tree/pbgzip

NB: I would be happy describe the implementation, and collaborate to get this into Picard too.

So are you saying you made a parallelized version of BZIP2? We have also been playing around with this. We parallelized the compression and decompression steps in the read/write functions of samtools for a local realignment tool we built.

I would love to learn more about what you are doing, as I would hate to duplicate anything you are already planning to do!

colindaven 03-06-2012 06:25 AM

As far as parallel (g)zip goes, pigz works wonders: http://zlib.net/pigz/

adaptivegenome 03-06-2012 06:42 AM

pigz is very, very fast; however, it produces files that are much larger than BZIP2's. Is this your experience as well?

It would be really nice to be able to simply parallelize BZIP2. We have tried to do this a little but certainly don't have a completed product yet.

lh3 03-06-2012 08:49 AM

Firstly, I greatly appreciate and strongly support Nils' effort to multithread samtools. The change is likely to be merged into samtools.

Re sorting algorithm: samtools sort does a stable sort (i.e. it preserves the relative order of records with the same coordinate). In some rare, non-typical use cases this feature is useful. Merge sort is stable; introsort is not.
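The stability point can be illustrated with Python's sorted(), which happens to be a stable mergesort variant (Timsort):

```python
# Reads at the same coordinate; the third field records input order.
reads = [("chr1", 100, "first"), ("chr1", 50, "x"), ("chr1", 100, "second")]

# A stable sort keeps the two coordinate-100 records in their original
# relative order; an unstable sort such as introsort would be free to
# swap them.
by_pos = sorted(reads, key=lambda r: r[1])
```

For coordinate-sorted BAMs this matters whenever downstream tools assume ties appear in input order.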

Re pigz: someone told me on BioStar that pigz does not scale well with many cores. If this is true (I have not tried it), it must be because the gzip format has long-range dependencies. bzip2 and bgzip are much easier to parallelize and probably more scalable. In addition, bzip2 has a parallel version, pbzip2, which the same person told me scales very well with the number of CPU cores.

Re bzip2: I have argued a couple of times here (years ago) and also on the samtools list that the key reason samtools uses gzip instead of bzip2 is that gzip is 5-10X faster on decompression. With bzip2, most samtools commands would be 2-10 times slower. I think for huge data sets that need to be read frequently, gzip is always preferred over bzip2.

adaptivegenome 03-06-2012 08:57 AM

I think it is worth figuring out the best way to compress/decompress. Our nodes have 64 cores, so I will run some tests to see how BZIP2 and GZIP scale. I'll post what I find in this thread.

In the meantime a quick internet search turned up this:
http://nerdbynature.de/s9y/?251


Quote:

Originally Posted by lh3 (Post 66934)
Firstly, I greatly appreciate and strongly support Nils' effort to multithread samtools. The change is likely to be merged into samtools.

Re sorting algorithm: samtools sort does a stable sort (i.e. it preserves the relative order of records with the same coordinate). In some rare, non-typical use cases this feature is useful. Merge sort is stable; introsort is not.

Re pigz: someone told me on BioStar that pigz does not scale well with many cores. If this is true (I have not tried it), it must be because the gzip format has long-range dependencies. bzip2 and bgzip are much easier to parallelize and probably more scalable. In addition, bzip2 has a parallel version, pbzip2, which the same person told me scales very well with the number of CPU cores.

Re bzip2: I have argued a couple of times here (years ago) and also on the samtools list that the key reason samtools uses gzip instead of bzip2 is that gzip is 5-10X faster on decompression. With bzip2, most samtools commands would be 2-10 times slower. I think for huge data sets that need to be read frequently, gzip is always preferred over bzip2.


lh3 03-06-2012 10:33 AM

I guess that benchmark is atypical. It is not common to find a file that can be compressed from 15GB to 600MB. Nonetheless, it does indicate that pigz is not scalable. Nils' pbgzip should be much better. Also, if you want to do a comparison, there is a more modern variant of bzip2 that is both much faster and achieves a better compression ratio. I forget its name; James Bonfield would know.

adaptivegenome 03-06-2012 10:39 AM

Quote:

Originally Posted by lh3 (Post 66955)
I guess that benchmark is atypical. It is not common to find a file that can be compressed from 15GB to 600MB. Nonetheless, it does indicate that pigz is not scalable. Nils' pbgzip should be much better. Also, if you want to do a comparison, there is a more modern variant of bzip2 that is both much faster and achieves a better compression ratio. I forget its name; James Bonfield would know.

Heng,

You are right. I will give this a try using a SAM file. I wonder if the 15GB file was made by duplicating some content over and over. This would explain the compression.

adaptivegenome 03-06-2012 03:09 PM

Guys,

Below are compression times for a 6.8GB SAM file, tested on Ubuntu 11.10 with the latest versions of all software. We got the latest source for each tool and compiled it on our node, which has 128GB of RAM and 4x AMD Opteron(TM) processors (64 cores total).


Cores   pigz     pbzip2   gzip     bzip2
1       --       --       21m18s   19m16s
2       10m32s   9m52s    --       --
16      1m25s    1m36s    --       --
64      1m06s    0m34s    --       --

The pbzip2 file was 1.7GB and the pigz file was 2GB, so not as big a difference as I thought.

nilshomer 03-06-2012 04:21 PM

It should not be too hard to make a bz2 BAM file using the bz2 library's BZ2_bzBuffToBuffCompress and BZ2_bzBuffToBuffDecompress. Of course, there are better methods than just using those functions (see pbzip2).
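As a rough illustration of that buffer-to-buffer idea (with Python's bz2 module standing in for the libbz2 calls), per-block bzip2 compression looks the same as the gzip case, just with a different codec:

```python
import bz2

def compress_block_bz2(block: bytes) -> bytes:
    # Python-level analogue of BZ2_bzBuffToBuffCompress applied to one
    # independently compressed BAM-style block.
    return bz2.compress(block, 9)

def decompress_block_bz2(comp: bytes) -> bytes:
    # Analogue of BZ2_bzBuffToBuffDecompress.
    return bz2.decompress(comp)
```

Because each block is independent, the same thread-pool scheme used for block gzip would apply unchanged; the trade-off is bzip2's much slower decompression.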

I am not sure how necessary all the signalling is in the current implementation, but debugging race conditions is a pain.

adaptivegenome 03-06-2012 04:26 PM

But is it worth it? BZIP2 wins on parallelization with lots of cores, but is that useful here? I thought samtools reads and writes in small blocks that are compressed and decompressed separately, so it seems you can just parallelize that, right? Do you really benefit from using BZIP2 over GZIP?

