Old 01-12-2017, 10:02 AM   #43
Brian Bushnell
Super Moderator
 
Location: Walnut Creek, CA

Join Date: Jan 2014
Posts: 2,695

Whether or not to use "prefilter" depends on how much memory you have rather than on the workflow. It roughly doubles BBNorm's runtime, but it increases accuracy when the dataset is very large compared to available memory. There is no accuracy penalty for using it, but the gain is trivial if you have plenty of memory - so with lots of RAM or a small dataset, you don't need it.

In your case the dataset has approximately 5 billion unique kmers (which is what the output of loglog.sh reports).
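For reference, that cardinality estimate comes from an invocation like the following sketch (the input filename is a placeholder; flag names are as shipped with BBTools):

```shell
# Estimate the number of unique kmers in a read set via a LogLog sketch.
# Prints an approximate cardinality rather than an exact count.
loglog.sh in=reads.fq
```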

As for BBNorm's memory use:

-Xmx is a Java flag that specifies how much heap memory Java will use. This is most, but not all, of the memory your job will use - there is some overhead. Normally BBNorm auto-detects how much memory is available and everything should work without you specifying -Xmx, but that depends on the job manager and system configuration. If you set -Xmx manually, it must be lower than the memory you request from the scheduler, not higher. I recommend about 84% on our cluster, but this varies. So, basically, if you submit requesting 100G, then set -Xmx84g. If the job still gets killed by the scheduler, decrease -Xmx rather than increasing it.
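As a concrete sketch of that 84% rule (the request size and percentage are just the example numbers from above, not universal values):

```shell
# Derive a -Xmx heap size as ~84% of the memory requested from the scheduler,
# leaving the remainder for JVM and system overhead.
REQUEST_GB=100
XMX_GB=$(( REQUEST_GB * 84 / 100 ))
echo "-Xmx${XMX_GB}g"
```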

For 5 billion unique kmers, I recommend using the prefilter flag. The overall command would be something like:

bbnorm.sh in=reads.fq outlow=low.fq outmid=mid.fq outhigh=high.fq passes=1 lowbindepth=10 highbindepth=80 prefilter

Even though BBNorm will mention "target depth" and "min depth", those values will not affect your outputs here - they only apply to reads sent to the "out=" stream (which you did not specify), not to reads sent to "outlow=" and so forth. Sorry it's a little confusing.
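For contrast, here is a sketch of the normalization mode where "target" and "min depth" do matter (the depth values are illustrative placeholders, not a recommendation):

```shell
# Normalization mode: "target" and "mindepth" control the out= stream,
# which is separate from the outlow/outmid/outhigh binning streams.
bbnorm.sh in=reads.fq out=normalized.fq target=40 mindepth=5
```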