SEQanswers

#1 | 11-22-2011, 06:46 AM
biznatch (Senior Member, Canada; joined Nov 2010; 124 posts)

Tophat memory usage during "Searching for junctions via segment mapping"

I'm using Tophat to align ~33 million 100 bp paired-end reads to the mouse genome with -p 8. About an hour into "Searching for junctions via segment mapping", memory usage jumps from a few GB to almost 20 GB, which exceeds the 16 GB on my machine and pushes it into the swap file. I'm also using --butterfly-search, which the Tophat manual says will slow down the alignment but says nothing about extra memory. Is this amount of memory usage normal, or did something go wrong?
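
One way to pin down the actual peak, assuming GNU time is available as /usr/bin/time (the index and read file names below are placeholders, not the poster's paths):

    /usr/bin/time -v tophat -p 8 --butterfly-search -o tophat_out \
        mouse_index reads_1.fq.gz reads_2.fq.gz
    # "Maximum resident set size (kbytes)" in the -v report is the true peak;
    # anything near 16777216 kB will push a 16 GB machine into swap.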
#2 | 04-29-2012, 09:22 AM
robinpsmith (Junior Member, San Francisco; joined Apr 2010; 5 posts)

Same problem

I'm running Tophat2 on two 389 MB paired-end read files, on 8 cores with 2.5 GB vmem per core (20 GB total) under SGE, using 16 threads.

The alignments go swimmingly (1 hour), then it takes ~19.7 GB of memory and 5+ hours to run the "Searching for junctions via segment mapping" step (my understanding is that the multiple threads are not used here). It then makes it to "Mapping left_kept_reads_seg1 against segment_juncs with Bowtie2 (1/4)" but gets booted from the queue, probably due to memory usage.

If I run it with 4 cores and 16 threads, it gets booted at the "Searching for junctions via segment mapping" step (4 cores = only 10 GB of memory).

I'm willing to go up to 10-12 cores, but I'm looking for advice first. Any advice would be greatly appreciated.

Thanks,

Robin
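
A hedged sketch of an SGE submission matching this setup; the parallel environment name and file paths are site-specific placeholders. Where h_vmem is enforced per slot, 8 slots at 2.5G give the 20 GB total, and keeping -p equal to the slot count avoids running 16 threads on an 8-slot allocation:

    #!/bin/bash
    #$ -pe smp 8          # 8 slots on one node (PE name varies by site)
    #$ -l h_vmem=2.5G     # per-slot cap: 8 x 2.5G = 20G for the whole job
    #$ -cwd
    tophat2 -p 8 -o tophat_out hg19_index reads_1.fq.gz reads_2.fq.gz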
#3 | 05-01-2012, 10:40 AM
yingzhang (Junior Member, Minneapolis; joined Feb 2012; 9 posts)

A slightly different memory issue

I am test-driving TopHat 2.0 on a Linux cluster.

I prepared 4 sample sets, each containing 13 to 18 million 50 bp paired reads. TopHat was called to process them one by one. I initialized the program with 8 GB of memory and 8 threads.

The first dataset was processed successfully in almost 2 hours. However, when the second dataset reached the "Building Bowtie index from genes.fa" step, memory usage went above 18 GB, so the system admin terminated my job.

So I restarted the job to run TopHat 2.0 on dataset 2 alone, with the same system settings. It finished flawlessly.

I am wondering whether TopHat 2.0 has some memory issues.
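
The step that failed here ("Building Bowtie index from genes.fa") is the transcriptome index build, which TopHat 2 otherwise repeats for every sample when only -G is given. A sketch of building it once and reusing it across samples, with placeholder paths:

    # One-off build (no reads supplied): writes the transcriptome index to tx/known
    tophat2 -G genes.gtf --transcriptome-index=tx/known genome_index
    # Per-sample runs then reuse it instead of rebuilding:
    tophat2 -p 8 --transcriptome-index=tx/known -o out_sample2 \
        genome_index sample2_1.fq.gz sample2_2.fq.gz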
#4 | 05-01-2012, 10:42 AM
robinpsmith (Junior Member, San Francisco; joined Apr 2010; 5 posts)

Problem somewhat solved

I found that removing the --fusion-search option substantially reduced memory usage. Not sure if this helps, but good luck.
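
For anyone unsure which options a finished or killed run actually used, TopHat records its full invocation in the output directory's logs; a quick check, assuming the default output layout:

    grep -e '--fusion-search' tophat_out/logs/run.log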
#5 | 05-01-2012, 10:49 AM
yingzhang (Junior Member, Minneapolis; joined Feb 2012; 9 posts)

I didn't use the option "--fusion-search". Based on the manual, I think the option is turned off by default.
#6 | 11-02-2012, 10:22 AM
ramma (Member, Washington; joined Jun 2012; 16 posts)

Did anyone find a solution to this?

I'm mapping about 30 million 50 bp reads to the human genome, and during the "Searching for junctions via segment mapping" step my memory usage skyrockets. About 2-4 GB of memory is used before that point; then 24-25 GB is used, and the run finally fails at the "Joining segment hits" step by exceeding my 32 GB of memory. This is with the latest tophat2 and bowtie2.

Thanks for any help.
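
With 50 bp reads, TopHat 2 enables the coverage-based junction search by default, and reports in this thread point to it as the main memory hog at this step. A hedged sketch with it disabled (index and file names are placeholders):

    tophat2 -p 8 --no-coverage-search -o tophat_out \
        hg19_index reads_1.fq.gz reads_2.fq.gz
    # --microexon-search and --fusion-search are off by default, so omitting
    # them keeps the junction search as lean as possible.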
#7 | 11-05-2012, 08:14 AM
robinpsmith (Junior Member, San Francisco; joined Apr 2010; 5 posts)

Are you just using the vanilla options, or getting complicated? My tophat2 command that worked for me is:

tophat2 -p 8 -m 2 -r 69 --mate-std-dev 200 -G genes.gtf -o ./out hg19 sample1_1.gz sample1_2.gz

But it does use a lot of memory; I think this run maxed out at 16 GB.
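
If the worry is a run quietly pushing the machine into swap rather than a queue kill, a soft cap can be set in the launching shell so TopHat aborts with an allocation error instead of thrashing (the 16 GB figure here is illustrative):

    ulimit -v $((16 * 1024 * 1024))   # cap virtual memory at 16 GB (ulimit -v takes kB)
    tophat2 -p 8 -m 2 -r 69 --mate-std-dev 200 -G genes.gtf -o ./out \
        hg19 sample1_1.gz sample1_2.gz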
#8 | 11-05-2012, 09:15 AM
ramma (Member, Washington; joined Jun 2012; 16 posts)

I'm using mostly default options. My tophat command looks something like this:

tophat2 -p 10 --coverage-search --microexon-search -o /path/to/output /path/to/index /path/to/input1 /path/to/input2
#9 | 02-18-2013, 12:29 AM
pettervikman (Member, Sweden; joined Nov 2009; 23 posts)

Any more info regarding this? I just realised that my first 14 jobs, which use --coverage-search --microexon-search, have gone into the junction search and have started gobbling up memory (16-20 GB per sample). If they aren't finished when the second batch of 14, or even worse the third batch of 14, reaches the same step, I'm not certain the servers have enough memory...
#10 | 02-18-2013, 09:47 AM
ramma (Member, Washington; joined Jun 2012; 16 posts)

Quote (Originally Posted by pettervikman):
Any more info regarding this? I just realised that my first 14 jobs, which use --coverage-search --microexon-search, have gone into the junction search and have started gobbling up memory (16-20 GB per sample). If they aren't finished when the second batch of 14, or even worse the third batch of 14, reaches the same step, I'm not certain the servers have enough memory...

I've stopped using --coverage-search and --microexon-search on my personal server (32 GB RAM), since running more than a single job at a time uses all the memory and the jobs get killed. With those options disabled I've been able to run about 4-6 jobs at a time with no memory issues.
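
A sketch of enforcing that kind of batching automatically, assuming GNU parallel is available (sample names, index, and file layout are placeholders):

    # Run at most 4 TopHat jobs concurrently; with the coverage and microexon
    # searches left off, 4 concurrent jobs fit in 32 GB per the report above.
    parallel -j 4 \
        'tophat2 -p 4 --no-coverage-search -o out_{} hg19_index {}_1.fq.gz {}_2.fq.gz' \
        ::: sampleA sampleB sampleC sampleD sampleE sampleF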