Old 12-16-2016, 08:12 PM   #17
Location: New York

Join Date: Dec 2016
Posts: 22

Thank you so much for the thorough explanation. I tried a couple of things; the results are reported below.

Originally Posted by Brian Bushnell
Personally, I consider this to be a major bug in the job schedulers that have this behavior. Also, not allowing programs to over-commit virtual memory (meaning, use more virtual memory than is physically present) is generally a very bad idea. Virtual memory is free, after all. What job scheduler are you using? And do you know what your cluster's policy is for over-committing virtual memory?
I'm not sure about the answers to these two questions. I will need to ask around and get back to you.
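In case it helps anyone else with the same question: on a Linux compute node, the kernel's overcommit policy and any per-process address-space cap can be read directly (a sketch assuming a Linux node; the scheduler may also impose its own limits that these commands won't show):

```shell
# Kernel virtual-memory overcommit policy:
#   0 = heuristic overcommit, 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory

# Per-process virtual address-space limit for this shell session;
# "unlimited" means no cap is imposed here
ulimit -v
```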

Originally Posted by Brian Bushnell
Anyway, please try requesting 48GB and using the -Xmx12g flag (or alternately requesting 16GB and using -Xmx4g) and let me know if that resolves the problem.
It still ran out of memory with 48GB requested and the -Xmx12g flag:

Code:
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at stream.FASTQ.makeId(
    at stream.FASTQ.quadToRead(
    at stream.FASTQ.toReadList(
    at stream.FastqReadInputStream.fillBuffer(
    at stream.FastqReadInputStream.nextList(
    at stream.ConcurrentGenericReadInputStream$ReadThread.readLists(
    at stream.ConcurrentGenericReadInputStream$

This program ran out of memory.
Try increasing the -Xmx flag and using tool-specific memory-related parameters.
Originally Posted by Brian Bushnell
So if you need to set -Xmx manually because the memory autodetection does not work (in which case, I'd like to hear the details about what the program does when you don't define -Xmx, because I want to make it as easy to use as possible), then please allow some overhead.
When I requested 16GB and did not specify -Xmx, the program's autodetection set the heap to the full requested memory (in fact slightly above 16GB), leaving no overhead:

Code:
java -ea -Xmx17720m -Xms17720m -cp
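As a side note, one way to confirm what heap ceiling the JVM actually ends up with (whether autodetected or set via -Xmx) is to query it from inside Java. This is a generic JVM check, not part of BBTools:

```java
// Report the maximum heap the running JVM will attempt to use,
// for comparison against the -Xmx value and the scheduler's grant.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxBytes / (1024L * 1024L));
    }
}
```

Running this under the same scheduler request (e.g. `java -Xmx12g HeapCheck`) shows whether the flag is actually taking effect inside the job.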
chiayi is offline