  • Why has no one ported popular aligners to GPGPU code?

    Has anyone attempted to convert BWA, bowtie, bfast, etc. to work on GPGPU? If not, why?

  • #2
    No, because it is a lot of work, and the implementations would be very closely tied to the hardware (even to specific cards). CPU implementations work everywhere.



    • #3
      Just recompiling the C code for one of these GPGPUs doesn't provide a whole lot of extra performance. There just aren't many floating-point calculations in alignment; it's mostly searching through ACTGs for matches. In BWA, the "core" alignment routine is bwt_match_gap(), and there are no doubles or floats declared in bwtgap.c; even the inlined int_log2() function is integer-based. So there are likely few floating-point operations to optimize via a GPU. The real optimization in short-read alignment is figuring out how to keep the cache full, or perhaps precomputing alignments for common sequences and just looking up the answer, or getting close and calling a superoptimized S-W routine.
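
      To illustrate the point, here is a minimal, hypothetical sketch of FM-index backward search (the kind of integer-only loop at the heart of BWA-style aligners, though not BWA's actual code): everything is counts, ranks and array lookups, with nothing for floating-point hardware to accelerate.

```cuda
#include <stdint.h>

/* Toy FM-index: the BWT stored one symbol per byte, plus cumulative counts.
 * A real index (as in BWA) uses sampled occ tables and packed 2-bit symbols,
 * but the point here is simply that every operation is integer arithmetic. */
typedef struct {
    const uint8_t *bwt;   /* BWT of the reference, symbols coded 0..3       */
    uint64_t       n;     /* length of the BWT                              */
    uint64_t       C[4];  /* C[c]: # of symbols lexicographically < c       */
} fm_index_t;

/* Naive rank: # of occurrences of c in bwt[0, i). O(n) here; real code
 * answers this from a sampled table in O(1).                               */
static uint64_t fm_rank(const fm_index_t *fm, int c, uint64_t i)
{
    uint64_t r = 0;
    for (uint64_t k = 0; k < i; ++k)
        r += (fm->bwt[k] == c);
    return r;
}

/* Backward search: size of the suffix-array interval matching `read`.      */
uint64_t backward_search(const fm_index_t *fm, const uint8_t *read, int len)
{
    uint64_t lo = 0, hi = fm->n;
    for (int i = len - 1; i >= 0 && lo < hi; --i) {
        int c = read[i];                       /* 0..3 = A,C,G,T */
        lo = fm->C[c] + fm_rank(fm, c, lo);
        hi = fm->C[c] + fm_rank(fm, c, hi);
    }
    return hi > lo ? hi - lo : 0;              /* number of exact hits */
}
```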



      • #4
        CUSHAW: using an algorithm somewhat similar to bwa

        BarraCUDA: based on BWA

        SOAP3-GPU: the next version of SOAP.

        BWA-CUDA: a reimplementation of bwa-sw; not in active development now



        • #5
          Thanks Nils. I thought there was a standard such as OpenCL or CUDA that would be cross-platform across GPU cards, so that such code would be portable across GPUs.

          @Richard, does this mean that increasing RAM yields more real-world improvement than optimizing/parallelizing code?



          • #6
            Certain approaches might well benefit from more RAM. This visualisation is stunning: http://i.imgur.com/X1Hi1.gif. It shows the latencies for L1, L2, RAM and disk. Note the accompanying discussion at Hacker News: http://news.ycombinator.com/item?id=702713

            Any strategy that can keep code and data in the L1 and L2 caches, or keep data off of disk and in RAM for repeated access, is going to benefit. Certain problems lend themselves to this approach.

            GPGPUs are good at things like "multiply these 4 numbers by those 4 numbers and put the 4 results here". If your data is a bunch of little vectors and you're going to crunch on them a lot, then moving the data to a GPGPU to do the work and then moving it back to where the CPU can get it might be a good solution.
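
            A minimal CUDA sketch of that pattern (illustrative only, not taken from any aligner): the kernel itself is trivial, and the host-to-device and device-to-host copies around it are exactly the data movement being described.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Elementwise multiply: each thread handles one pair of numbers.
__global__ void vec_mul(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] * b[i];
}

int main(void)
{
    const int n = 1 << 20;                      // ~1M elements
    const size_t bytes = n * sizeof(float);
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes),
          *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);

    // Host -> device copies, kernel launch, device -> host copy: the two
    // transfers are the overhead that has to be amortized by the compute.
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);
    vec_mul<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[42] = %f\n", hc[42]);             // expect 84.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```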



            • #7
              Originally posted by dukzilla View Post
              Has anyone attempted to convert BWA, bowtie, bfast, etc. to work on GPGPU? If not, why?
              It's a lot of work, for a couple of reasons (among others):
              • Strange programming model: GPUs really, really like threads - you need thousands, preferably more. Most bioinformatics tools are single-threaded or at best marginally multi-threaded, even though using a two-digit number of threads isn't all that hard. To apply GPU-friendly numbers of threads, you need to redesign the core algorithms to fit - a much more difficult task. And GPUs hate branches - which most bioinformatics tools use a lot of - which means more redesigning (see the sketch at the end of this post).
              • Per-card optimization: Even though the code will run cross-card, you still need to optimize for each GPU family and, for best performance, even for individual family members. And if you're not that bothered about performance, why are you doing this in the first place?
              • Limited on-chip & on-card RAM: Most bioinformatics tools like random access to a large amount of memory - GPUs have very limited memory on chip (e.g. ~64KB shared between a group of threads), and access to their external (on-card) RAM is slow. Plus the on-card RAM may not even be big enough for, e.g., a genome index, which means transfers from main RAM, which is even slower.

              At the end of the day it's not a port, it's a complete rewrite - and there isn't much kudos in that.
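
              A small hypothetical example of the branch point above (not from any real aligner): all 32 threads of a warp execute in lockstep, so a data-dependent if/else can serialize both paths whenever threads in the same warp disagree, while the arithmetic form sidesteps the divergence.

```cuda
#include <cuda_runtime.h>
#include <stdint.h>

// Branchy version: threads in a warp that take different paths can force
// the warp to execute both paths (divergence).
__global__ void score_branchy(const uint8_t *read, const uint8_t *ref,
                              int *score, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (read[i] == ref[i])
        score[i] = 1;      // match
    else
        score[i] = -3;     // mismatch
}

// Branch-free version: the comparison becomes an integer 0/1 and the score
// is computed arithmetically, so every thread follows the same path.
__global__ void score_branchless(const uint8_t *read, const uint8_t *ref,
                                 int *score, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int match = (read[i] == ref[i]);           // 0 or 1
    score[i] = match * 1 + (1 - match) * -3;   // 1 if match, -3 otherwise
}
```

              A launch like score_branchless<<<(n + 255) / 256, 256>>>(...) over a few million bases already creates the tens of thousands of threads a GPU wants - which is the redesign effort the first bullet describes.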



              • #8
                Considering the volume of data we are generating now, getting that data into the GPU and back out once the processing is done is going to be a challenge.

                Someone with more knowledge of GPU programming can correct me, but I suspect the interface (PCI-E) bus bandwidth may prove limiting unless you are using special hardware.
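
                A rough sketch of how one might check that, using CUDA events to time a single host-to-device copy (the numbers depend heavily on the card, the PCIe generation, the transfer size, and whether the host buffer is pinned):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const size_t bytes = 256ull << 20;          // 256 MB test buffer
    void *host, *dev;
    cudaMallocHost(&host, bytes);               // pinned host memory (faster DMA)
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time one host -> device transfer across the PCIe bus.
    cudaEventRecord(start);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("host -> device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(dev); cudaFreeHost(host);
    return 0;
}
```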



                • #9
                  Originally posted by lh3 View Post
                  CUSHAW: using an algorithm somewhat similar to bwa

                  BarraCUDA: based on BWA

                  SOAP3-GPU: the next version of SOAP.

                  BWA-CUDA: a reimplementation of bwa-sw; not in active development now
                  I was looking for something similar (we have a Tesla we want to test).
                  Benchmarks for BarraCUDA show its performance is similar to bwa's when multithreading is enabled; I would like to test it on our hardware, just to see whether the comparison is fair.
                  SOAP3 looks great, although it has huge memory and hardware requirements.

                  d



                  • #10
                    CUSHAW is said to be >10X faster than bwa (1 Fermi GPU vs. 1 CPU) and marginally more accurate, but the downside is that it does not do gapped alignment. CUSHAW is optimized for Fermi; I do not know whether it works on Tesla at all. This is basically Nils' point: to get the best performance, you have to optimize the algorithm for a particular architecture.



                    • #11
                      Agreed. Also, the bottleneck will always be I/O. CUDA is great for other purposes (at the moment): we are using it to extend the computing power of R and Mathematica (and also to perform motif analysis with cuda-meme on large datasets).



                      • #12
                        Spend the money on extra cores.



                        • #13
                          Spend the money on extra disks.

                          *sigh*



                          • #14
                            You can go through this paper if you want to understand how convoluted the code needs to be in order to get performance out of a GPU.



                            I just don't think this approach is going to scale well when the problems to be solved get more and more complicated. I will bet on more RAM and a newer breed of CPUs that copy many features from GPUs and support hundreds of cores without requiring acrobatics in the code.



                            • #15
                              Originally posted by jpjp View Post
                              I just don't think this approach is going to scale well when the problems to be solved get more and more complicated.
                              I suspect the 80:20 rule (or more like 99:1 rule) will be the saviour here - you don't need to optimize the performance of all the code, just the key nasty part that takes up the vast majority of the execution time.
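
                              (To make that concrete: if the core alignment kernel accounts for, say, 95% of the runtime and a GPU makes that part 10x faster, the overall speedup is 1/(0.05 + 0.95/10) ≈ 6.9x, so optimizing just that one hot spot already captures most of the achievable gain.)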

                              Originally posted by jpjp View Post
                              I will bet on more RAM and a newer breed of CPUs that copy many features from GPUs and support hundreds of cores without requiring acrobatics in the code.
                              Using 100+ cores, be it CPU or GPU, requires acrobatics.

                              But it's true to some extent: as GPUs get CPU-like flexibility (and CPUs get more cores) the differences will narrow, and good performance will become easier to achieve (though not easy by any means - most bioinformatics tools are still single-threaded).

