  • #31
    Oh... sorry, the explanation is a bit different here. By default BBNorm runs in 2-pass mode, which gives the best normalization. However, that generates temp files (which are later deleted). The final outputs are only from the second pass - reads discarded in the first pass would disappear completely.

    For what you are doing I recommend this command:

    bbnorm.sh in=input_fq1.gz in2=input_fq2.gz zerobin=t prefilter=t target=1000 min=10 passes=1 ecc=t out1=bbnorm.fq1.gz out2=bbnorm.fq2.gz outt=excluded.fq.gz

    Then the output numbers should add up as expected.
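
    As a quick sanity check (just a sketch; it assumes standard 4-line FASTQ records and the file names from the command above), you can count the reads in each output and confirm that out1 + out2 + outt add up to the input:

    Code:
    for f in bbnorm.fq1.gz bbnorm.fq2.gz excluded.fq.gz; do
        # divide the line count by 4 to get the number of reads in each file
        printf '%s\t' "$f"
        zcat "$f" | awk 'END{print NR/4}'
    done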

    Comment


    • #32
      Hi Brian

      I am having difficulty controlling the load of bbnorm on our server. Regardless of which number I enter for threads=, it always uses all idle cores, and the load average eventually goes above the number of cores.

      Our java is:

      java version "1.8.0_60"
      Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
      Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

      Any solution?
      Best

      Comment


      • #33
        While @Brian will be along later with an official answer, I feel this may not be directly related to BBMap itself. If you have pigz installed on your machine, then BBMap tools use it by default to decompress files, and that program may be starting additional threads that overwhelm your system.

        If pigz is installed, you could turn it off by adding "pigz=f unpigz=f" to your BBMap tool commands and see if that stops the problem. Do keep using the threads= option. You are not running this under a job scheduler, correct?

        Comment


        • #34
          Hi

           Thanks for the suggestion, but it didn't help; the load still goes through the ceiling. I used it like so:

          bbnorm.sh -Xmx100g in= in2= out= out2= target=200 mindepth=3 threads=4 pigz=f unpigz=f

          I could do much more than 4 threads but just wanted to see what happens.

          Best

          Comment


          • #35
            Can you test these two options, "gunzip=f bf2=f", and report what happens?

            Comment


            • #36
              Sorry. No change.

              Comment


              • #37
                Will have to wait on @Brian.

                 For reference: how many cores and how much memory are available on this system? What is the size of the dataset?

                Comment


                • #38
                   40 cores, Intel(R) Xeon(R) CPU E7-8860 @ 2.27GHz

                   1 TB RAM

                   Dataset: 2 x 75 million 125bp reads

                  Thanks!

                  Comment


                  • #39
                    Brian, thank you so much for the excellent tools!

                    Is it possible to say at what level the error correction would be able to distinguish between sequencing errors and heterogeneity in the source sample?

                    For example, if the source was a 500bp PCR product and 2% of the molecules had a substitution at base 100, would BBnorm flag that as an error? Is there an approximate percent heterogeneity at any particular base that serves as the dividing line between 'error' and 'SNP'?

                    Thanks!

                    Comment


                    • #40
                      Garbage collection

                       If you are using Oracle's JVM (or perhaps others too), what you're seeing as excess CPU consumption from bbnorm might actually stem from garbage collection within the JVM. This really depends on the application's behaviour.

                       There has been a lot of work on the performance of garbage collectors in Java, and there are a few to choose from.

                       As a quick validation test, you could try insisting on the single-threaded collector by adding the following option to the java invocation inside the bbnorm.sh script. (Sorry, there doesn't seem to be a means of passing that in.)

                      Code:
                      -XX:+UseSerialGC
                       You can also specify thread limits for the parallel collector. You don't have to restrict it completely to see changes in concurrency; 4 is actually quite strict on a modern multicore CPU.

                      Code:
                      -XX:ParallelGCThreads=4 -XX:ConcGCThreads=4

                       Lots of further information can be found in Oracle's documentation of VM options.

                       Keep in mind that SerialGC means the program will likely halt briefly at GC events, so you should expect a runtime penalty if the parallel GC was already working quite hard.
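
                       If editing the script is inconvenient, one possible workaround (a sketch only, assuming a HotSpot JVM; file names are placeholders) is the _JAVA_OPTIONS environment variable, which the JVM reads at startup and which therefore reaches the java invocation inside bbnorm.sh. The JVM prints a "Picked up _JAVA_OPTIONS" notice when it applies the flags.

                       Code:
                       # limit the parallel collector's threads without editing bbnorm.sh
                       export _JAVA_OPTIONS="-XX:ParallelGCThreads=4 -XX:ConcGCThreads=4"
                       bbnorm.sh -Xmx100g in=reads_1.fq.gz in2=reads_2.fq.gz out=norm_1.fq.gz out2=norm_2.fq.gz target=200 mindepth=3 threads=4
                       unset _JAVA_OPTIONS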

                      Comment


                      • #41
                        Originally posted by evanname View Post
                        Brian, thank you so much for the excellent tools!

                        Is it possible to say at what level the error correction would be able to distinguish between sequencing errors and heterogeneity in the source sample?

                        For example, if the source was a 500bp PCR product and 2% of the molecules had a substitution at base 100, would BBnorm flag that as an error? Is there an approximate percent heterogeneity at any particular base that serves as the dividing line between 'error' and 'SNP'?

                        Thanks!
                        Sorry for the very late reply, but anyway -

                         I recommend using Tadpole for error-correction now; it is substantially better than BBNorm because it uses exact kmer counts and algorithms designed to take advantage of them. I now only use BBNorm for normalization and for plotting kmer-frequency histograms of datasets too big to fit into memory, not for error-correction.

                         I don't recommend doing error-correction at all on data in which you are hoping to find rare SNPs. That said, by default, BBNorm determines a base to be in error if there is at least a 1:140 ratio of kmer counts between it and the adjacent kmers, so a 2% SNP should be safe. Tadpole, on the other hand, defaults to a 1:16 ratio for detecting errors, which is much more aggressive and would wipe out a 2% SNP. Why is it more aggressive? Well... I tried to optimize the parameters for the best SPAdes assemblies, and SPAdes seems to perform best with pretty aggressive error-correction. You can change that threshold, though.
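
                         For reference, a basic Tadpole error-correction run on a paired library looks something like the sketch below (file names are placeholders; run tadpole.sh without arguments to see the options that control how aggressive the correction is):

                         Code:
                         # error-correct a paired-end library with Tadpole
                         tadpole.sh in=reads_1.fq.gz in2=reads_2.fq.gz out=ecc_1.fq.gz out2=ecc_2.fq.gz mode=correct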

                        Comment


                        • #42
                          A question regarding the partitioning option of BBNorm

                          Hi,
                          I want to preferentially assemble the genome of a low-abundance community member from a metagenome, so I am interested in the partitioning option of BBNorm.

                          I have some questions on how to choose the best parameters though:

                          - For the other BBNorm workflows (normalization, filtering, error correction) you recommend the "prefilter" option. Is this also advisable for the partitioning workflow? (This option is used in most of the example usages of BBNorm in the documentation EXCEPT the partitioning workflow.)

                          - From the description, I assumed that by giving "outlow, outmid and outhigh" arguments, the usual normalization workflow would be overridden and ALL reads would be grouped into one of these categories. However, the preliminary output of BBNorm states that a "target depth" of 100 and a "min depth" of 5 are being applied. Does that mean that all reads below a coverage of five will be discarded? Do I need to adjust the "mindepth" parameter as well?

                          - Our job-submission pipeline requires specifying a maximum RAM usage for every script that is started. However, BBNorm keeps exceeding this value (which leads to termination of the job). I kept increasing the memory limit of BBNorm using the "-Xmx" argument up to 200G, but BBNorm always exceeds the allotted limit (even when using the "prefilter" option above).
                          Do I have to consider any additional memory requirements of the script, in addition to the "-Xmx" limit? How would I determine how much memory is needed?
                          (The dataset consists of about 84,547,019 read pairs; loglog.sh calculated a "Cardinality" of 5,373,179,884, but I do not know exactly how to interpret this value.)

                          Thanks for any suggestions.

                          Comment


                          • #43
                            Whether or not to use "prefilter" just depends on the amount of memory you have, rather than on the workflow. It makes BBNorm take roughly twice as long, but it increases accuracy in cases where the dataset is very large compared to memory - so there is no penalty for using it other than the extra time, and it always increases accuracy, though the increase is trivial if you have a lot of memory. If you have lots of RAM or a small dataset, you don't need it.

                            In your case the dataset has approximately 5 billion unique kmers (which is what the output of loglog.sh means).

                            As for BBNorm's memory use:

                            -Xmx is a Java flag that specifies how much heap memory Java will use. This is most, but not all, of the memory that your job will use - there is some overhead. Normally BBNorm will auto-detect how much memory is available and everything should be fine without you needing to specify -Xmx, but that depends on the job manager and system configuration. If you manually specify memory with -Xmx, it must be lower than the memory you request from the scheduler, not higher. I recommend about 84% on our cluster, but this depends on the system. So, basically, if you submit a job requesting 100G, then set -Xmx84g. If this gets killed by the scheduler, then decrease -Xmx rather than increasing it.

                            For 5 billion unique kmers, I recommend using the prefilter flag. The overall command would be something like:

                             bbnorm.sh in=reads.fq outlow=low.fq outmid=mid.fq outhigh=high.fq passes=1 lowbindepth=10 highbindepth=80 prefilter=t

                             Even though BBNorm will mention "target depth" and "min depth", those values will not affect your outputs - they only affect reads that go to the "out=" stream (which you did not specify), not reads that go to "outlow=" and so forth. Sorry, it's a little confusing.
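
                             As a concrete illustration of the memory guidance above (hypothetical numbers, and assuming a SLURM-style scheduler; adapt it to your own submission system), a job requesting 120G from the scheduler would set -Xmx to roughly 84% of that:

                             Code:
                             #!/bin/bash
                             #SBATCH --mem=120G
                             #SBATCH --cpus-per-task=16
                             # -Xmx100g is about 84% of the 120G requested from the scheduler
                             bbnorm.sh -Xmx100g in=reads.fq outlow=low.fq outmid=mid.fq outhigh=high.fq passes=1 lowbindepth=10 highbindepth=80 prefilter=t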

                            Comment


                            • #44
                              I've described the algorithm in some detail in /bbmap/docs/guides/BBNormGuide.txt. I also wrote this a while back:

                              Overview

                              This program accepts input files of single or paired reads in fasta or fastq format, correcting substitution-type errors and normalizing the read depth to some desired target before outputting the result. All stages are multithreaded, allowing very high processing speed while still (optionally) maintaining strictly deterministic output.

                              Phase 1: Gather Kmer Frequencies

                              An input file of sequence data is read and processed. Each read is translated into a set of all constituent kmers of fixed k (default 31). Each kmer’s count is incremented in a shared table (a count-min sketch) whenever it is seen, so at the end of this phase, the frequencies of all kmers are known.

                              Phase 2: Correct and Normalize Reads

                              The input file is read a second time. Each read is again translated into an array of kmers, and each kmer’s count is read from the table. An error-free read is expected to have a relatively smooth profile of kmer counts, so each read is scanned for the presence of adjacent kmers with discrepant counts. For such a pair of adjacent kmers, the one with the high count is considered a “good kmer” (or genomic kmer) and the one with the low count is considered a possible “error kmer”. The single base covered by the error kmer but not the good kmer is considered the suspect “error base”. In addition to absolute cutoffs for counts of good and bad kmers, data with very high coverage is handled with a relative cutoff for the ratio of adjacent kmer counts.

                              All 4 possible replacements of the error base (A, C, G, T) are considered. For each replacement, the kmer covering the error base is regenerated, and its count read from the table. If exactly one of the four replacements has a count sufficiently high to be considered a good kmer and the other three are sufficiently low to be considered error kmers, then the error base is replaced accordingly, and the error is considered corrected. Otherwise, the error cannot be corrected; any prior corrections are rolled back, and the read is output unchanged.

                              If normalization is desired, the kmer counts from correction are re-used to determine whether a read should be discarded. If the median count is below a cutoff, the read is discarded as noise. Reads between the lower cutoff and the target depth are all retained. Otherwise, the median is above the target depth and the read is discarded with probability 1-(target/median). For paired reads, the greater median is compared to the cutoff, and the lesser median is compared to the target; the pair is either kept or discarded together. Normalization may be run using multiple passes for greater precision.
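
                               As a rough illustration of the normalization decision just described (a toy sketch, not BBNorm's actual code; the numbers are made up), the per-read logic amounts to:

                               Code:
                               # keep/discard rule for a single read, given its median kmer count
                               target=100; mindepth=5; median=450

                               if [ "$median" -lt "$mindepth" ]; then
                                   echo "discard (noise: median below min depth)"
                               elif [ "$median" -le "$target" ]; then
                                   echo "keep (at or below target depth)"
                               else
                                   # above target: keep with probability target/median,
                                   # i.e. discard with probability 1 - (target/median)
                                   keep=$(awk -v t="$target" -v m="$median" 'BEGIN{srand(); print ((rand() < t/m) ? 1 : 0)}')
                                   if [ "$keep" -eq 1 ]; then echo "keep"; else echo "discard"; fi
                               fi
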
                              Note that I don't recommend BBNorm for error-correction anymore, though, since Tadpole does a much better job (which is possible because it uses exact kmer counts). So I just use BBNorm for normalization and depth partitioning.

                              Comment


                              • #45
                                @Brian Bushnell
                                 Thanks a lot. BBNorm has now completed successfully.

                                Comment
