SEQanswers

12-03-2016, 10:57 AM   #1
Brian Bushnell (Super Moderator)
Introducing Clumpify: Create 30% Smaller, Faster Gzipped Fastq Files

I'd like to introduce a new member of the BBMap package, Clumpify. This is a bit different from other tools in that it does not actually change your data at all; it simply reorders reads to maximize gzip compression. Therefore, the output files are still fully-compatible gzipped fastq files, and Clumpify has no effect on downstream analysis aside from making it faster. It’s quite simple to use:

Code:
clumpify.sh in=reads.fq.gz out=clumped.fq.gz reorder
This command assumes paired, interleaved reads or single-ended reads; Clumpify does not work with paired reads in twin files (they would need to be interleaved first). You can, of course, first interleave twin files into a single file with Reformat, clumpify them, and then de-interleave the output into twin files, and still gain the compression advantages.
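As a minimal sketch of that round trip with Reformat (file names are placeholders; in/in2 and out/out2 are the standard Reformat flags for interleaving and de-interleaving):

Code:
reformat.sh in=reads_R1.fq.gz in2=reads_R2.fq.gz out=interleaved.fq.gz
clumpify.sh in=interleaved.fq.gz out=clumped.fq.gz reorder
reformat.sh in=clumped.fq.gz out=clumped_R1.fq.gz out2=clumped_R2.fq.gz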

How does this work? Clumpify operates on a similar principle to that which makes sorted bam files smaller than unsorted bam files – the reads are reordered so that reads with similar sequence are nearby, which makes gzip compression more efficient. But unlike sorted bam, during this process, pairs are kept together so that an interleaved file will remain interleaved with pairing intact. Also unlike a sorted bam, it does not require mapping or a reference, and except in very unusual cases, can be done with an arbitrarily small amount of memory. So, it’s very fast and memory-efficient compared to mapping, and can be done with no knowledge of what organism(s) the reads came from.

Internally, Clumpify forms clumps of reads sharing special ‘pivot’ kmers, implying that those reads overlap. These clumps are then further sorted by position of the kmer in the read so that within a clump the reads are position-sorted. The net result is a list of sorted clumps of reads, yielding compression within a percent or so of sorted bam.

How long does Clumpify take? It's very fast. If all data can fit in memory, Clumpify needs the amount of time it takes to read and write the file once. If the data cannot fit in memory, it takes around twice that long.

Why does this increase speed? There are a lot of processes that are I/O limited. For example, on a multicore processor, using BBDuk, BBMerge, Reformat, etc. on a gzipped fastq will generally be rate-limited by gzip decompression (even if you use pigz, which is much faster at decompression than gzip). Gzip decompression seems to be rate-limited by the number of input bytes per second rather than output, meaning that a raw file of a given size will decompress X% faster if it is compressed Y% smaller; X and Y are roughly proportional, though not quite 1-to-1. In my tests, assemblies with SPAdes and Megahit show time reductions from Clumpified input that more than pay for the time needed to run Clumpify, largely because both are multi-kmer assemblers which read the input file multiple times. Something purely CPU-limited like mapping would normally not benefit much in terms of speed (though still a bit, due to improved cache locality).

When and how should Clumpify be used? If you want to clumpify data for compression, do it as early as possible (e.g. on the raw reads). Then run all downstream processing steps in a way that maintains read order (e.g. use the “ordered” flag if you use BBDuk for adapter-trimming) so that the clump order is preserved; that way, all intermediate files will benefit from the increased compression and increased speed. I recommend running Clumpify on ALL data that will ever go into long-term storage, or whenever there is a long pipeline with multiple steps and intermediate gzipped files. Also, even when data will not go into long-term storage, if a shared filesystem is being used or files need to be sent over the internet, running Clumpify as early as possible will conserve bandwidth. The only times I would not clumpify data are enumerated below.
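As a small sketch of such a pipeline (file names and trimming parameters are just examples; “ordered” keeps BBDuk's output in input order):

Code:
clumpify.sh in=raw.fq.gz out=clumped.fq.gz reorder
bbduk.sh in=clumped.fq.gz out=trimmed.fq.gz ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 ordered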

When should Clumpify not be used? There are a few cases where it probably won’t help:

1) For reads with a very low kmer depth, due to either very low coverage (like 1x WGS) or super-high-error-rate (like raw PacBio data). It won’t hurt anything but won’t accomplish anything either.

2) For large volumes of amplicon data. This may or may not help; if all of your reads are expected to share the same kmers, they may all form one giant clump and again nothing will be accomplished. It won’t hurt anything, though, and if pivots are randomly selected from variable regions, it might increase compression.

3) When your process is dependent on the order of reads. If you always grab the first million reads from a file assuming they are a good representation of the rest of the file, Clumpify will invalidate that assumption – just like grabbing the first million reads from a sorted bam file would not be representative. Fortunately, this is never a good practice, so if you are currently doing that, now would be a good opportunity to change your pipeline anyway. Random subsampling is a much better approach (see the sketch after this list).

4) If you are only going to read data fewer than ~3 times, it will never go into long-term storage, and it's being used on local disk so bandwidth is not an issue, there's no point in using Clumpify (or gzip, for that matter).
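For the random subsampling mentioned in point 3, a minimal sketch with Reformat (the rate and seed values are arbitrary examples):

Code:
reformat.sh in=clumped.fq.gz out=subsampled.fq.gz samplerate=0.1 sampleseed=7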

As always, please let me know if you have any questions, and please make sure you are using the latest version of BBTools when trying new functionality.


P.S. For maximal compression, you can output bzipped files by using the .bz2 extension instead of .gz, if bzip2 or pbzip2 is installed. This is actually pretty fast if pbzip2 is installed and you have enough cores, and with enough cores it even decompresses faster than gzip. It increases compression by around 9%.
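For example (the same command as above, just with a different output extension; pbzip2 needs to be on the path for multithreaded compression):

Code:
clumpify.sh in=reads.fq.gz out=clumped.fq.bz2 reorder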

Last edited by Brian Bushnell; 12-03-2016 at 11:23 PM.
12-04-2016, 08:04 AM   #2
GenoMax (Senior Member)

Can this be extended to identify PCR-duplicates and optionally flag or eliminate them?

Would piping output of clumpify into dedupe achieve fast de-duplication?
12-04-2016, 08:05 AM   #3
GenoMax (Senior Member)

Going to put in a plug for the tens of other things the BBMap suite's tools can do. A compilation is available in this thread.
12-04-2016, 12:00 PM   #4
Brian Bushnell (Super Moderator)

Quote:
Originally Posted by GenoMax View Post
Can this be extended to identify PCR-duplicates and optionally flag or eliminate them?
That's a good idea; I'll add that. The speed would still be similar to Dedupe, but it would eliminate the memory requirement.

Quote:
Would piping output of clumpify into dedupe achieve fast de-duplication?
Hmmm, you certainly could do that, but I don't think it would be overly useful. Piping Clumpify to Dedupe would end up making the process slower overall, and Dedupe reorders the reads randomly so it would lose the benefit of running Clumpify. I guess I really need to add an "ordered" option to Dedupe; I'll try to do that next week.
12-06-2016, 07:47 AM   #5
vout (Junior Member)

Quote:
Originally Posted by Brian Bushnell View Post
In my tests, assemblies with SPAdes and Megahit show time reductions from Clumpified input that more than pay for the time needed to run Clumpify, largely because both are multi-kmer assemblers which read the input file multiple times. Something purely CPU-limited like mapping would normally not benefit much in terms of speed (though still a bit, due to improved cache locality).
In fact, Megahit does not read the input files multiple times. It converts the fastq/a files into a binary format and reads the binary file multiple times. I guess that cache locality is the key. Imagine that the same group of k-mers is processed together in different components of Megahit -- graph construction (assigning k-mers to different buckets, then sorting), and local assembly & extraction of iterative k-mers (inserting k-mers into a hash table)... In this regard, alignment tools may also benefit from it substantially.

Great work Brian.
12-06-2016, 08:14 AM   #6
GenoMax (Senior Member)

Quote:
If all data can fit in memory, Clumpify needs the amount of time it takes to read and write the file once. If the data cannot fit in memory, it takes around twice that long.
Is there a way to force clumpify to use just memory (if enough is available) instead of writing to disk?

Edit: On second thought that may not be practical/useful but I will leave the question in for now to see if @Brian has any pointers.

For a 12 GB input gzipped fastq file, clumpify made 28 temp files (each between 400-600 MB in size).

Edit 2: The final file size was 6.8 GB, so a significant reduction in size.

Last edited by GenoMax; 12-06-2016 at 11:38 AM.
12-06-2016, 08:20 AM   #7
Markiyan (Member)
Any chance of including fasta support for amino acid sequences?

Dear Brian,

Thank you very much for the tool; it can be very helpful for I/O-bound cloud folks.

Are there any plans to include fasta support for amino acid sequences
(grouping similar proteins together)?

It would need to support very long fasta ID lines – up to 10 kb.
12-06-2016, 09:40 AM   #8
Brian Bushnell (Super Moderator)

Quote:
Originally Posted by vout View Post
In fact, Megahit does not read the input files multiple times. It converts the fastq/a files into a binary format and reads the binary file multiple times. I guess that cache locality is the key. Imagine that the same group of k-mers is processed together in different components of Megahit -- graph construction (assigning k-mers to different buckets, then sorting), and local assembly & extraction of iterative k-mers (inserting k-mers into a hash table)... In this regard, alignment tools may also benefit from it substantially.

Great work Brian.
Well, you know what they say about assumptions! Thanks for that tidbit. For reference, here is a graph of the effect of Clumpify on Megahit times. I just happened to be testing Megahit and Clumpify at the same time, and this was the first time I noticed that Clumpify accelerated assembly; I wasn't really sure why, but assumed it was either due to cache locality or reading speed.



Incidentally, Clumpify has an error-correction mode, but I was unable to get that to improve Megahit assemblies (even though it does improve SPAdes assemblies). Megahit has thus far been recalcitrant to my efforts to improve its assemblies with any form of error-correction, which I find somewhat upsetting. In the above graph, "asm3" has the least pre-processing (no kmer-based error-correction) and so is the most reflective of the times we would get in practice; some of the others have low-depth reads discarded. And to clarify, the blue bars are the times for Megahit to assemble the non-clumpified reads, while the green bars are the times for the Clumpified reads; in each case the input data is identical aside from read order. The assembly continuity stats were almost identical (though not quite, due to Megahit's non-determinism), but the differences were trivial.

Quote:
Originally Posted by Genomax
Is there a way to force clumpify to use just memory (if enough is available) instead of writing to disk?

Edit: On second thought that may not be practical/useful but I will leave the question in for now to see if @Brian has any pointers.

For a 12 GB input gzipped fastq file, clumpify made 28 temp files (each between 400-600 MB in size).
Clumpify tests the size and compressibility at the beginning, and then *very conservatively* guesses how many temp files it needs based on projecting the memory use of the input (note that it is impossible to determine the decompressed size of a gzipped file without fully decompressing it, which takes too long). If it is confident everything can fit into memory with a 250% safety margin, then it will just use one group and not write any temp files. I had to make it very conservative to be safely used in production; sometimes there are weird degenerate cases with, say, length-1 reads, or where everything is poly-A or poly-N, that are super-compressible but use a lot of memory. You can manually force it to use one group with the flag "groups=1". With the "reorder" flag, a single group will compress better, since reorder does not work with multiple groups. Also, a single group is faster, so it's preferable. The only risk is running out of memory and crashing when forcing "groups=1".
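A minimal sketch of forcing a single group (the heap size here is just an example; only do this if you are confident the data fits in memory):

Code:
clumpify.sh -Xmx31g in=reads.fq.gz out=clumped.fq.gz reorder groups=1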

Quote:
Originally Posted by Markiyan
Dear Brian,

Thanks you very much for the tool, can be very helpful for io bound cloud folks.

Are there any plans for including fasta support for aminoacid sequences
(group the similar proteins together)?

Must support very long fasta ID lines - up to 10Kb.
There's no support for that planned, but nothing technically preventing it. However, Clumpify is not a universal compression utility - it will only increase compression when there is coverage depth (meaning, redundant information). So, for a big 10GB file of amino acid sequences - if they were all different proteins, there would not be redundant information, and they would not compress; on the other hand, if there were many copies of the same proteins from different but very closely-related organisms, or different isoforms of the same proteins scattered around randomly in the file, then Clumpify would group them together, which would increase compression.
[Attached image: clump_megahit.png – Megahit assembly times with and without Clumpify]

Last edited by Brian Bushnell; 12-06-2016 at 09:48 AM.
12-07-2016, 01:41 AM   #9
Markiyan (Member)

Quote:
Originally Posted by Brian Bushnell View Post
There's no support for that planned, but nothing technically preventing it. However, Clumpify is not a universal compression utility - it will only increase compression when there is coverage depth (meaning, redundant information). So, for a big 10GB file of amino acid sequences - if they were all different proteins, there would not be redundant information, and they would not compress; on the other hand, if there were many copies of the same proteins from different but very closely-related organisms, or different isoforms of the same proteins scattered around randomly in the file, then Clumpify would group them together, which would increase compression.
OK, so in order to cluster amino acid sequences with the current clumpify version, that means:
1. parse the fasta, reverse-translate to DNA using a single codon per amino acid;
2. save as nt fastq;
3. clumpify;
4. parse the fastq, translate back;
5. save as aa fasta.
12-07-2016, 03:58 AM   #10
GenoMax (Senior Member)

Quote:
Originally Posted by Markiyan View Post
OK, so in order to cluster amino acid sequences with the current clumpify version, that means:
1. parse the fasta, reverse-translate to DNA using a single codon per amino acid;
2. save as nt fastq;
3. clumpify;
4. parse the fastq, translate back;
5. save as aa fasta.
Or you could just use CD-HIT.
12-07-2016, 09:37 AM   #11
Brian Bushnell (Super Moderator)

Whether you use Clumpify or CD-Hit, I'd be very interested if you could post the file size results before and after.

Incidentally, you can use BBTools to do AA <-> NT translation like this:

Code:
translate6frames.sh in=proteins.faa.gz aain=t aaout=f out=nt.fna
clumpify.sh in=nt.fna out=clumped.fna
translate6frames.sh in=clumped.fna out=protein2.faa.gz frames=1 tag=f zl=6
12-07-2016, 11:35 AM   #12
Brian Bushnell (Super Moderator)

I ran some benchmarks on 100x NextSeq E.coli data, to compare file sizes under various conditions:



This shows the file size, in bytes. Clumpified data is almost as small as mapped, sorted data, but takes much less time. The exact sizes were:
Code:
100x.fq.gz	360829483
clumped.fq.gz	251014934
That's a 30.4% reduction. Note that this was for NextSeq data without binned quality scores. When the quality scores are binned (as is the default for NextSeq) the increase in compression is even greater:

Code:
100x_binned.fq.gz	267955329
clumped_binned.fq.gz	161766626
...a 39.6% reduction. I don't recommend quality-score binning, though Clumpify does have the option of doing so (with the quantize flag).
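If you do want to quantize, here is a sketch of the command (I'm assuming the bare "quantize" flag form mentioned above; check clumpify.sh's built-in help for the exact syntax in your version):

Code:
clumpify.sh in=reads.fq.gz out=clumped_binned.fq.gz reorder quantize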



This is the script I used to generate these sizes and times:
Code:
time clumpify.sh in=100x.fq.gz out=clumped_noreorder.fq.gz
time clumpify.sh in=100x.fq.gz out=clumped.fq.gz reorder
time clumpify.sh in=100x.fq.gz out=clumped_lowram.fq.gz -Xmx1g
time clumpify.sh in=100x.fq.gz out=clumped.fq.bz2 reorder
time reformat.sh in=100x.fq.gz out=100x.fq.bz2
time bbmap.sh in=100x.fq.gz ref=ecoli_K12.fa.gz out=mapped.bam bs=bs.sh; time sh bs.sh
reformat.sh in=mapped_sorted.bam out=sorted.fq.gz zl=6
reformat.sh in=mapped_sorted.bam out=sorted.sam.gz zl=6
reformat.sh in=mapped_sorted.bam out=sorted.fq.bz2 zl=6
[Attached images: clump_size.png (file sizes) and clump_time.png (run times)]
12-14-2016, 11:11 AM   #13
sklages (Senior Member)

Interesting tool. Though I wish it could deal with "twin files", as these are the initial "raw files" of Illumina's bcl2fastq output. Additionally, many tools require the pairs to be separated ... converting back and forth :-)
12-14-2016, 12:32 PM   #14
Brian Bushnell (Super Moderator)

OK, I'll make a note of that... there's nothing preventing paired file support, it's just simpler to write for interleaved files when there are stages involving splitting into lots of temp files. But I can probably add it without too much difficulty.
12-16-2016, 04:43 PM   #15
chiayi (Member)

Hello Brian,

I started to use clumpify and the size was reduced by ~25% on average for NextSeq Arabidopsis data. Thanks for the development!

In a recent run on HiSeq maize data, I got an error for some (but not all) of the files. At first the run would get stuck at fetching and eventually fail due to not enough memory (set to 16 GB), even though the memory estimate was ~2 GB.

HTML Code:
Clumpify version 36.71
Memory Estimate:        2685 MB
Memory Available:       12836 MB
Set groups to 1
Executing clump.KmerSort [in=input.fastq.bz2, out=clumped.fastq.gz, groups=1, ecco=false, rename=false, shortname=f, unpair=false, repair=false, namesort=false, ow=true, -Xmx16g, reorder=t]

Making comparator.
Made a comparator with k=31, seed=1, border=1, hashes=4
Starting cris 0.
Fetching reads.
Making fetch threads.
Starting threads.
Waiting for threads.
=>> job killed: mem job total 17312912 kb exceeded limit 16777216 kb
When I increased to 48 GB, the run was killed at the "Making clumps" stage without a specific reason:

HTML Code:
Starting threads.
Waiting for threads.
Fetch time:     321.985 seconds.
Closing input stream.
Combining thread output.
Combine time:   0.108 seconds.
Sorting.
Sort time:  33.708 seconds.
Making clumps.
/home/cc5544/bin/clumpify.sh: line 180: 45220 Killed
Do you know what may be the cause of this situation? Thank you.
12-16-2016, 05:18 PM   #16
Brian Bushnell (Super Moderator)

It looks like in both cases, Clumpify did not run out of memory, but was killed by your job scheduling system or OS. This can happen sometimes when the job scheduler is designed to instantly kill processes when virtual memory exceeds a quota; it used to happen on JGI's cluster until we made some adjustments. The basic problem is this:

When Clumpify (or any other program) spawns a subprocess, that uses a fork operation, and the OS temporarily allocates twice the original virtual memory. It seems very strange to me, but here's what happens in practice:

1) You run Clumpify on a .bz2 file, and tell Clumpify to use 16 GB with the flag -Xmx16g, or similar. Even if it only needs 2GB of RAM to store the input, it will still use (slightly more than) 16 GB of virtual memory.
2) Clumpify sees that the input file is .bz2. Java cannot natively process bzipped files, so it starts a subprocess running bzip2 or pbzip2. That means a fork occurs and for a tiny fraction of a second the processes are using 32GB of virtual memory (even though at that point nothing has been loaded, so the physical memory being used is only 40 MB or so). After that fraction of a second, Clumpify will still be using 16 GB of virtual memory and 40 MB of physical memory, and the bzip2 process will be using a few MB of virtual and physical memory.
3) The job scheduler looks at the processes every once in a while to see how much memory they are using. If you are unlucky, it might look right at the exact moment of the fork. Then, if you only scheduled 16 GB and are using 32 GB of virtual memory, it will kill your process, even though you are only using 40 MB of physical memory at that time.

Personally, I consider this to be a major bug in the job schedulers that have this behavior. Also, not allowing programs to over-commit virtual memory (meaning, use more virtual memory than is physically present) is generally a very bad idea. Virtual memory is free, after all. What job scheduler are you using? And do you know what your cluster's policy is for over-committing virtual memory?

I think that in this case the system will allow the program to execute and not kill it if you request 48 GB, but add the flag "-Xmx12g" to Clumpify. That way, even when it uses a fork operation to read the bzipped input, and potentially another fork operation to write the gzipped output with pigz, it will still stay under the 48 GB kill limit. Alternately you could decompress the input before running Clumpify and tell it not to use pigz with the pigz=f flag, but I think changing the memory settings is a better solution because that won't affect speed.

As for the 25% file size reduction - that's fairly low for NextSeq data with binned quality scores; for gzip in and gzip out, I normally see ~39%. Clumpify can output bzipped data if you name the output file as .bz2; if your pipeline is compatible with bzipped data, that should increase the compression ratio a lot, since .bz2 files are smaller than .gz files. Of course, unless you are using pbzip2 the speed will be much lower; but with pbzip2 and enough cores, .bz2 files compress fast and decompress even faster than .gz.

Anyway, please try with requesting 48GB and using the -Xmx12g flag (or alternately requesting 16GB and using -Xmx4g) and let me know if that resolves the problem.

Oh, I should also mention that if you request 16 GB, then even if the program is not doing any forks, you should NOT use the flag -Xmx16g; you should use something like -Xmx13g (roughly 85% of what you requested). Why? -Xmx16g sets the heap size, but Java needs some memory for other things too (like per-thread stack memory, memory for loading classes, memory for the virtual machine, etc). So if you need to set -Xmx manually because the memory autodetection does not work (in which case, I'd like to hear the details about what the program does when you don't define -Xmx, because I want to make it as easy to use as possible) then please allow some overhead. Requesting 16 GB and using the flag -Xmx16g is something I would expect to always fail on systems that do not allow virtual memory overcommit. In other words, possibly, your first command would work fine if you just changed the -Xmx16g to -Xmx13g.
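As a concrete sketch of that rule of thumb (the 16 GB request and file names are just examples):

Code:
# scheduler grants 16 GB; leave ~15% headroom for non-heap JVM memory
clumpify.sh -Xmx13g in=reads.fq.gz out=clumped.fq.gz reorder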
12-16-2016, 07:12 PM   #17
chiayi (Member)

Thank you so much for the thorough explanation. I tried a couple things and please find the reports as follows:

Quote:
Originally Posted by Brian Bushnell View Post
Personally, I consider this to be a major bug in the job schedulers that have this behavior. Also, not allowing programs to over-commit virtual memory (meaning, use more virtual memory than is physically present) is generally a very bad idea. Virtual memory is free, after all. What job scheduler are you using? And do you know what your cluster's policy is for over-committing virtual memory?
I'm not sure about the answers to these two questions. I will need to ask around and get back to you.

Quote:
Anyway, please try with requesting 48GB and using the -Xmx12g flag (or alternately requesting 16GB and using -Xmx4g) and let me know if that resolves the problem.
Still ran out of memory with a 48 GB request and the -Xmx12g flag:

HTML Code:
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at stream.FASTQ.makeId(FASTQ.java:567)
    at stream.FASTQ.quadToRead(FASTQ.java:785)
    at stream.FASTQ.toReadList(FASTQ.java:710)
    at stream.FastqReadInputStream.fillBuffer(FastqReadInputStream.java:111)
    at stream.FastqReadInputStream.nextList(FastqReadInputStream.java:96)
    at stream.ConcurrentGenericReadInputStream$ReadThread.readLists(ConcurrentGenericReadInputStream.java:656)
    at stream.ConcurrentGenericReadInputStream$ReadThread.run(ConcurrentGenericReadInputStream.java:635)

This program ran out of memory.
Try increasing the -Xmx flag and using tool-specific memory-related parameters.
Quote:
So if you need to set -Xmx manually because the memory autodetection does not work (in which case, I'd like to hear the details about what the program does when you don't define -Xmx, because I want to make it as easy to use as possible) then please allow some overhead.
When I requested 16 GB and did not specify -Xmx, the program matched the requested memory:
java -ea -Xmx17720m -Xms17720m -cp
12-16-2016, 07:25 PM   #18
Brian Bushnell (Super Moderator)

Dear chiayi,

Is this data confidential, or is it possible for you to send it to me? I would really like to eliminate this kind of bug, but I'm not sure I can do it without the data that triggers the problem.
12-17-2016, 06:25 AM   #19
chiayi (Member)

I would love that. I will send you access in a message momentarily. Thank you so much!
12-17-2016, 11:41 PM   #20
Brian Bushnell (Super Moderator)

OK, I figured out the problem. Clumpify initially decides whether or not to split the file into multiple groups depending on whether it will fit into memory. The memory estimation was taking into account gzip compression, but NOT bz2 compression. That's a very easy fix.

That only solves the last error (java.lang.OutOfMemoryError: GC overhead limit exceeded), not the first two. But once I fix it the last command will run fine.

As for the first two errors, it looks like those are due to your cluster configuration being too aggressive about killing jobs; I recommend that for any type of job where you see that message (which might just be limited to running BBTools on bz2 files), you use the -Xmx parameter with slightly under half of the RAM you requested (e.g. -Xmx7g when requesting 16 GB).

Last edited by Brian Bushnell; 12-18-2016 at 09:03 AM.

Tags
bbduk, bbmap, bbmerge, clumpify, compression, pigz, reformat, tadpole
