SEQanswers




Old 05-01-2012, 03:31 PM   #1
krobison
Senior Member
 
Location: Boston area

Join Date: Nov 2007
Posts: 747
New Ray crash

Me again -- new crash with Ray. I'm running it on Amazon EC2 on a High-Memory Quadruple Extra Large instance (26 compute units; 68.4 GB of memory). I'm wondering (a) whether I somehow ran out of memory and (b) whether I should use a smaller "-np" value (currently 24; the instance advertises 26 compute units) to try to fix this.

The run was initiated with the following command. The sequences are Illumina 100 bp paired-end reads pre-processed with FLASH; because of the way the libraries were constructed, most sequences ended up in the single-read file (wgc339.extendedFrags.fastq).

Code:
mpirun -np 24 /home/ec2-user/Ray/Ray -p wgc339.notCombined_1.fastq wgc339.notCombined_2.fastq -s wgc339.extendedFrags.fastq -o wgc339-ray 1> ray.wgc339.out 2> ray.wgc339.err
The message in the file from STDERR is

Code:
--------------------------------------------------------------------------
mpirun noticed that process rank 3 with PID 30808 on node ip-10-136-61-52 exited on signal 9 (Killed).
--------------------------------------------------------------------------
The bottom of the STDOUT redirect is:
Code:
Rank 9 is computing vertices & edges [3370001/5089855]
Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 51 units/second
Estimated remaining time for this step: 9 hours, 22 minutes, 2 seconds
Rank 3 is computing vertices & edges [3360001/5089855]
Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 41 units/second
Estimated remaining time for this step: 11 hours, 43 minutes, 11 seconds
Rank 18 is computing vertices & edges [3360001/5089855]
Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 44 units/second
Estimated remaining time for this step: 10 hours, 55 minutes, 14 seconds
Rank 6 is computing vertices & edges [3350001/5089855]
Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 51 units/second
Estimated remaining time for this step: 9 hours, 28 minutes, 34 seconds
Rank 14 is computing vertices & edges [3350001/5089855]
Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 40 units/second
Estimated remaining time for this step: 12 hours, 4 minutes, 56 seconds
Rank 17 is computing vertices & edges [3360001/5089855]
Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 48 units/second
Estimated remaining time for this step: 10 hours, 38 seconds
Rank 6 has 60400000 vertices
Rank 6: assembler memory usage: 2643428 KiB
The last STDOUT messages from Rank 3 were:
Code:
Rank 3 has 60400000 vertices
Rank 3: assembler memory usage: 2643424 KiB
Rank 3 is computing vertices & edges [3350001/5089855]
Rank 3 is computing vertices & edges [3360001/5089855]
Does mpirun leave some informative log files I should be checking?
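In case it helps others hitting the same thing: if the kernel's OOM killer sent that signal 9, the kernel log on the node should record it. A sketch of the check (the log line below is an illustrative sample, not output from this run):

```shell
# Search the kernel ring buffer for OOM-killer activity (run on the node):
#   dmesg | grep -i -E 'out of memory|killed process'
# An OOM kill leaves a line shaped like this (illustrative sample):
echo "Out of memory: Killed process 30808 (Ray) total-vm:2643424kB" |
  grep -i -E 'out of memory|killed process'
```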

thanks in advance for any guidance
Old 06-05-2012, 08:36 AM   #2
seb567
Senior Member
 
Location: Québec, Canada

Join Date: Jul 2008
Posts: 260

Hello!

It is nice to see people using Ray in the cloud!


According to Amazon Web Services LLC, the specification of the instance you are using is:

High-Memory Quadruple Extra Large Instance
  • 68.4 GB of memory
  • 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each)
  • 1690 GB of instance storage
  • 64-bit platform
  • I/O Performance: High
  • API name: m2.4xlarge

If you run less /proc/cpuinfo, you will see 8 processor cores, not 26.
Keep in mind that Amazon instances are virtual.
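For instance, a quick way to count the processor cores the guest OS actually exposes (a sketch; nproc is part of GNU coreutils):

```shell
# Count logical processors visible to the guest OS:
grep -c ^processor /proc/cpuinfo
# GNU coreutils equivalent:
nproc
```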

Therefore, I suspect either that an error occurred in the hypervisor supervising the virtual machine, or that your instance was killed because the load was too high (a load of 24 on 8 processor cores is excessive).

Launching 24 Ray processes on an 8-core virtual machine results in over-subscription of the cores: you had 3 Ray processes per available processor core, which causes a lot of context switching.

We can see this in the Ray journal: the speed of the RAY_SLAVE_MODE_EXTRACT_VERTICES step is only 51 units/second.
This speed should be well above 1000. Depending on your read length, it can reach 3000-4000 units/second per processor core.


You should therefore try with -n 8. Alternatively, you can allocate several instances (let's say 4) and launch Ray with mpiexec -n 24 across the 4 instances.
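As a sketch of the multi-instance setup (the private IPs below are hypothetical; substitute your own instances), an Open MPI hostfile giving 8 slots per 8-core instance avoids over-subscription:

```shell
# Hypothetical hostfile: 4 m2.4xlarge instances, 8 slots (cores) each.
cat > hosts.txt <<'EOF'
10.0.0.1 slots=8
10.0.0.2 slots=8
10.0.0.3 slots=8
10.0.0.4 slots=8
EOF
# 4 instances x 8 cores = 24 ranks with no over-subscription, e.g.:
#   mpiexec -n 24 -hostfile hosts.txt /home/ec2-user/Ray/Ray ...
```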

This process is documented on my blog.


I hope this is helpful for you and the community.
Old 06-05-2012, 10:32 AM   #3
krobison
Senior Member
 
Location: Boston area

Join Date: Nov 2007
Posts: 747

Sebastien:

Thank you again for all your help with this. Having the difference between cores & threads disambiguated is very useful.

BTW, a handy way to manage clusters on AWS is StarCluster from MIT -- it automates cluster set-up, tear-down, and resizing.