  • New Ray crash

    Me again -- new crash with Ray. I'm running it on Amazon EC2 on a High-Memory Quadruple Extra Large instance (26 EC2 Compute Units; 68.4 GB of memory). I'm wondering (a) whether I somehow ran out of memory and (b) whether I should use a smaller "-np" value (currently 24) to try to fix this.

    The run was initiated with the following command. The sequences are Illumina 100 bp paired-end reads pre-processed with FLASH; because of the way the libraries were constructed, most sequences ended up in the single-read file (wgc339.extendedFrags.fastq).

    Code:
    mpirun -np 24 /home/ec2-user/Ray/Ray -p wgc339.notCombined_1.fastq wgc339.notCombined_2.fastq -s wgc339.extendedFrags.fastq -o wgc339-ray 1> ray.wgc339.out 2> ray.wgc339.err
    The message in the STDERR file is:

    Code:
    --------------------------------------------------------------------------
    mpirun noticed that process rank 3 with PID 30808 on node ip-10-136-61-52 exited on signal 9 (Killed).
    --------------------------------------------------------------------------
    The bottom of the STDOUT redirect is:
    Code:
    Rank 9 is computing vertices & edges [3370001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 51 units/second
    Estimated remaining time for this step: 9 hours, 22 minutes, 2 seconds
    Rank 3 is computing vertices & edges [3360001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 41 units/second
    Estimated remaining time for this step: 11 hours, 43 minutes, 11 seconds
    Rank 18 is computing vertices & edges [3360001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 44 units/second
    Estimated remaining time for this step: 10 hours, 55 minutes, 14 seconds
    Rank 6 is computing vertices & edges [3350001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 51 units/second
    Estimated remaining time for this step: 9 hours, 28 minutes, 34 seconds
    Rank 14 is computing vertices & edges [3350001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 40 units/second
    Estimated remaining time for this step: 12 hours, 4 minutes, 56 seconds
    Rank 17 is computing vertices & edges [3360001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 48 units/second
    Estimated remaining time for this step: 10 hours, 38 seconds
    Rank 6 has 60400000 vertices
    Rank 6: assembler memory usage: 2643428 KiB
    The last messages from Rank 3 in STDOUT were:
    Code:
    Rank 3 has 60400000 vertices
    Rank 3: assembler memory usage: 2643424 KiB
    Rank 3 is computing vertices & edges [3350001/5089855]
    Rank 3 is computing vertices & edges [3360001/5089855]
    Does mpirun leave some informative log files I should be checking?
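
    Signal 9 makes me suspect the kernel OOM killer, so one place I plan to look (a guess on my part, not anything Ray- or mpirun-specific) is the kernel log:

    Code:
    # Did the kernel OOM-killer terminate the process (PID 30808)?
    dmesg | grep -i -E 'out of memory|killed process'
    # On Amazon Linux the same messages also land in the syslog:
    sudo grep -i 'killed process' /var/log/messages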

    Thanks in advance for any guidance.

  • #2
    Hello!

    It is nice to see people using Ray in the cloud!


    According to Amazon Web Services LLC, the specification of the instance you are using is:

    High-Memory Quadruple Extra Large Instance
    • 68.4 GB of memory
    • 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each)
    • 1690 GB of instance storage
    • 64-bit platform
    • I/O Performance: High
    • API name: m2.4xlarge


    If you run less /proc/cpuinfo, you will see 8 processor cores, not 26.
    Keep in mind that Amazon instances are virtual.
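
    For example, a quick way to count the cores directly (standard Linux, nothing Ray-specific):

    Code:
    # Count the processor cores visible inside the instance
    grep -c ^processor /proc/cpuinfo
    # On an m2.4xlarge this prints 8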

    Therefore, I suspect that an error occurred in the hypervisor supervising your virtual machine, or that your instance killed the process because the load was too high (a load of 24 on 8 processor cores is far too high).

    Launching 24 Ray processes on an 8-core virtual machine results in over-subscription of the cores: you had 3 Ray processes per available processor core, which causes a lot of context switching.

    You can see this in the Ray log: the speed of the step called RAY_SLAVE_MODE_EXTRACT_VERTICES is only 51 units/second.
    This speed should be well above 1000. Depending on your read length, it can reach 3000-4000 units/second per processor core.


    You should therefore try with -n 8. Alternatively, you can allocate several instances (let's say 4) and launch Ray with mpiexec -n 24 across them, as sketched below.
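
    For the single-instance case, that is your exact command with the process count lowered to the number of cores:

    Code:
    mpirun -np 8 /home/ec2-user/Ray/Ray -p wgc339.notCombined_1.fastq wgc339.notCombined_2.fastq -s wgc339.extendedFrags.fastq -o wgc339-ray 1> ray.wgc339.out 2> ray.wgc339.err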

    This process is documented on my blog.
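
    As a rough sketch of the multi-instance launch (Open MPI hostfile syntax; the private hostnames below are placeholders, and the input files must be reachable at the same path on every instance, e.g. over NFS):

    Code:
    # hosts.txt lists one line per instance, 8 MPI slots each
    cat > hosts.txt <<EOF
    ip-10-0-0-1 slots=8
    ip-10-0-0-2 slots=8
    ip-10-0-0-3 slots=8
    ip-10-0-0-4 slots=8
    EOF
    # Launch 24 Ray processes spread across the 4 instances
    mpiexec -n 24 -hostfile hosts.txt /home/ec2-user/Ray/Ray -p wgc339.notCombined_1.fastq wgc339.notCombined_2.fastq -s wgc339.extendedFrags.fastq -o wgc339-ray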


    I hope this is helpful for you and the community.



    • #3
      Sebastien:

      Thank you again for all your help with this. The difference between cores and threads is very useful to have disambiguated.

      BTW, a handy way to manage clusters on AWS is StarCluster from MIT; it automates setting up, tearing down, and resizing a cluster.
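
      For reference, a minimal session looks something like this (command names from memory, so check the StarCluster docs; the cluster name is a placeholder and the details live in ~/.starcluster/config):

      Code:
      starcluster start mycluster       # boot a cluster from a config template
      starcluster addnode mycluster     # resize: add one node
      starcluster terminate mycluster   # tear the whole cluster down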
