I have a question about memory use in the new version of Tophat (v2.0.4). We ran previous versions of Tophat on our cluster over 8 threads with 2 GB RAM per thread (16 GB total) with no problems; in fact, our systems admin thinks it was only actually using about 10 GB of the 16 GB available. Since upgrading to the new version, Tophat runs out of memory during the BAM-file merging stage. We have tried several solutions, including increasing RAM to 4 GB per thread over 8 threads, but the only one that has worked is running over 4 threads with 8 GB of memory per thread (i.e. 32 GB total). While this solves our problem, it is quite a heavy memory request to make of the cluster for our jobs, and we are also not completely convinced that Tophat is actually using all the memory we request.
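For anyone hitting the same wall, this is a sketch of the submission that finally ran for us: 4 threads at 8 GB each (32 GB total). The `-pe smp` / `h_vmem` syntax is a Grid Engine assumption, and `run_tophat.sh`, the index path, and the read file are placeholders; adapt to your own scheduler and data.

```shell
#!/bin/bash
# Workaround: fewer threads, more memory per thread (4 x 8 GB = 32 GB total).
THREADS=4
MEM_PER_THREAD=8   # GB per slot
TOTAL=$((THREADS * MEM_PER_THREAD))
echo "requesting ${TOTAL} GB total"

# Hypothetical Grid Engine submission (adjust -pe name and h_vmem for your cluster):
#   qsub -pe smp $THREADS -l h_vmem=${MEM_PER_THREAD}G run_tophat.sh
# where run_tophat.sh calls Tophat with a matching thread count, e.g.:
#   tophat -p $THREADS -o tophat_out Bowtie2Index/genome reads_50bp.fastq
```

The key point is keeping the `-p` value passed to Tophat in sync with the number of slots requested, so the per-thread memory limit actually covers what each thread allocates.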
I wonder if anyone has experienced this kind of problem with the new version, or can offer any tips or suggestions which may help. Also, does anyone know why the new version of Tophat is so memory heavy, more so than the older versions?
For info, the error message we were getting was:

Code:
[FAILED]
Error: [Errno 12] Cannot allocate memory
Found 123034 junctions from happy spliced reads

Also for info, we are mapping 50 bp single-end Illumina reads to the human genome using the iGenomes reference files. The use of Bowtie1 or Bowtie2 within Tophat doesn't make any difference - both run out of memory with the same error message.

Thanks, Helen