#1
Senior Member
Location: Canada
Join Date: Nov 2010
Posts: 124
I'm using TopHat to align ~33 million 100 bp paired-end reads to the mouse genome with -p 8. About an hour into the "Searching for junctions via segment mapping" step, memory usage jumps from a few GB to almost 20 GB, which exceeds the 16 GB on my computer, and it starts using the swap file. I'm also using --butterfly-search, which the TopHat manual says will slow down the alignment but doesn't say should use more memory. Is this amount of memory usage normal, or did something go wrong?
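For reference, the invocation is roughly this (the mm9 index base and the read file names here are placeholders, not my actual paths):

tophat -p 8 --butterfly-search -o tophat_out mm9 reads_1.fastq reads_2.fastq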
#2
Junior Member
Location: San Francisco
Join Date: Apr 2010
Posts: 5
I'm running TopHat2 on two 389 MB paired-end read files, on 8 cores with 2.5 GB vmem per core (= 20 GB total) in SGE, using 16 threads.
The alignments go swimmingly (1 hour), then it takes ~19.7 GB of memory and 5+ hours to run the "Searching for junctions via segment mapping" step (my understanding is that multiple threads are not used there). It then makes it to "Mapping left_kept_reads_seg1 against segment_juncs with Bowtie2 (1/4)" but gets booted from the queue, probably due to memory usage. If I run it with 4 cores and 16 threads, it gets booted at the "Searching for junctions via segment mapping" step (4 cores = only 10 GB of memory). I'm willing to go up to 10-12 cores, but I'm looking for advice first; any you care to provide would be greatly appreciated.

Thanks, Robin
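For context, the submission looks roughly like this (the parallel environment name "smp" and whether h_vmem is enforced per slot depend on your SGE configuration; run_tophat.sh is just a hypothetical wrapper script around the tophat2 call):

# request 8 slots with 2.5G of virtual memory each
qsub -pe smp 8 -l h_vmem=2.5G run_tophat.sh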
#3
Junior Member
Location: Minneapolis
Join Date: Feb 2012
Posts: 9
I am test-driving TopHat 2.0 on a Linux cluster.
I prepared 4 sample sets, each containing 13 to 18 million 50 bp paired reads, and called TopHat to process them one by one, initializing the program with 8 GB of memory and 8 threads. The first dataset was processed successfully in almost 2 hours. However, when the second dataset reached the "Building Bowtie index from genes.fa" step, memory usage went above 18 GB, so the system admin terminated my job. I then restarted the job to run TopHat 2.0 on dataset 2 alone, with the same system settings, and it finished flawlessly. I am wondering whether TopHat 2.0 has some memory issues.
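The per-dataset calls look roughly like this (index base and file names are placeholders). If the "Building Bowtie index from genes.fa" step is the culprit, TopHat 2.0's --transcriptome-index option should let the transcriptome index be built once and reused across datasets instead of being rebuilt on every run:

# run the four datasets one by one, reusing a prebuilt transcriptome index
for i in 1 2 3 4; do
  tophat -p 8 -G genes.gtf --transcriptome-index=transcriptome/known \
    -o out_sample$i genome sample${i}_1.fq sample${i}_2.fq
done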
#4
Junior Member
Location: San Francisco
Join Date: Apr 2010
Posts: 5
I found that removing the --fusion-search option substantially reduced memory usage. Not sure if this helps, but good luck.
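To illustrate with placeholder paths, the only change was dropping that flag:

# before: peak memory was much higher
tophat2 --fusion-search -p 8 -o out hg19 sample_1.fq.gz sample_2.fq.gz
# after: substantially lower peak
tophat2 -p 8 -o out hg19 sample_1.fq.gz sample_2.fq.gz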
#5
Junior Member
Location: Minneapolis
Join Date: Feb 2012
Posts: 9
I didn't use the option "--fusion-search". Based on the manual, I think the option is turned off by default.
#6
Member
Location: Washington
Join Date: Jun 2012
Posts: 16
Did anyone find a solution to this?
I'm mapping about 30 million 50 bp reads to the human genome, and during the "Searching for junctions via segment mapping" step my memory usage skyrockets. About 2-4 GB of memory is used before it, then 24-25 GB, and the run finally fails at the "Joining segment hits" step by exceeding my 32 GB of memory. This is with the latest TopHat2 and Bowtie2. Thanks for any help.
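In case it helps with debugging, one way to record the actual peak is to wrap the run in GNU time, whose -v flag reports the maximum resident set size when the process exits (assuming /usr/bin/time is GNU time on your system; the index base and read file names are placeholders):

/usr/bin/time -v tophat2 -p 10 -o tophat_out hg19 reads_1.fastq reads_2.fastq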
#7
Junior Member
Location: San Francisco
Join Date: Apr 2010
Posts: 5
Are you just using the vanilla options, or getting complicated? My tophat2 command that worked for me is:

tophat2 -p 8 -m 2 -r 69 --mate-std-dev 200 -G genes.gtf -o ./out hg19 sample1_1.gz sample1_2.gz

But it does use a lot of memory - I think this one maxed out at 16 GB.
#8
Member
Location: Washington
Join Date: Jun 2012
Posts: 16
I'm using mostly default options. My tophat command looks something like this:

tophat2 -p 10 --coverage-search --microexon-search -o /path/to/output /path/to/input1 /path/to/input2
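(One note on the paraphrase above: tophat2 expects the Bowtie2 index base between the options and the read files, so the full form would be more like the following, with genome_index standing in for the actual index name:)

tophat2 -p 10 --coverage-search --microexon-search -o /path/to/output genome_index /path/to/input1 /path/to/input2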
#9
Member
Location: Sweden
Join Date: Nov 2009
Posts: 23
Any more info regarding this? I just realised that my first 14 jobs, which use --coverage-search --microexon-search, have gone into the junction search and have started gobbling up memory (16-20 GB per sample). If they aren't finished when the next batch of 14, or even worse the third batch of 14, reaches the same step, I'm not certain the servers have enough memory...
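One option I'm considering is throttling things with an array job so that only 14 samples run at once (assuming Grid Engine with a version that supports -tc, and a hypothetical run_tophat.sh wrapper that picks its sample based on $SGE_TASK_ID):

# e.g. three batches of 14: submit all 42 tasks, at most 14 concurrent
qsub -t 1-42 -tc 14 run_tophat.sh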
#10
Member
Location: Washington
Join Date: Jun 2012
Posts: 16
Quote: