  • New Ray crash

    Me again -- new crash with Ray. I'm running it at Amazon EC2 on a High-Memory Quadruple Extra Large instance (26 compute units; 68.4 GB of memory). I'm wondering (a) whether I somehow ran out of memory and (b) whether I should use a smaller "-np" parameter (currently 26, the number of processors) to try to fix this.

    The run was initiated with the following command. The sequences are Illumina 100 bp paired-end reads pre-processed with FLASH; because of the way the libraries were constructed, most of the sequences ended up in the single-read file (wgc339.extendedFrags.fastq).

    Code:
    mpirun -np 24 /home/ec2-user/Ray/Ray -p wgc339.notCombined_1.fastq wgc339.notCombined_2.fastq -s wgc339.extendedFrags.fastq -o wgc339-ray 1> ray.wgc339.out 2> ray.wgc339.err
    The message in the STDERR file is:

    Code:
    --------------------------------------------------------------------------
    mpirun noticed that process rank 3 with PID 30808 on node ip-10-136-61-52 exited on signal 9 (Killed).
    --------------------------------------------------------------------------
    The bottom of the STDOUT redirect is:
    Code:
    Rank 9 is computing vertices & edges [3370001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 51 units/second
    Estimated remaining time for this step: 9 hours, 22 minutes, 2 seconds
    Rank 3 is computing vertices & edges [3360001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 41 units/second
    Estimated remaining time for this step: 11 hours, 43 minutes, 11 seconds
    Rank 18 is computing vertices & edges [3360001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 44 units/second
    Estimated remaining time for this step: 10 hours, 55 minutes, 14 seconds
    Rank 6 is computing vertices & edges [3350001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 51 units/second
    Estimated remaining time for this step: 9 hours, 28 minutes, 34 seconds
    Rank 14 is computing vertices & edges [3350001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 40 units/second
    Estimated remaining time for this step: 12 hours, 4 minutes, 56 seconds
    Rank 17 is computing vertices & edges [3360001/5089855]
    Speed RAY_SLAVE_MODE_EXTRACT_VERTICES 48 units/second
    Estimated remaining time for this step: 10 hours, 38 seconds
    Rank 6 has 60400000 vertices
    Rank 6: assembler memory usage: 2643428 KiB
    The last messages from Rank 3 on STDOUT were:
    Code:
    Rank 3 has 60400000 vertices
    Rank 3: assembler memory usage: 2643424 KiB
    Rank 3 is computing vertices & edges [3350001/5089855]
    Rank 3 is computing vertices & edges [3360001/5089855]
    Does mpirun leave some informative log files I should be checking?

    Thanks in advance for any guidance.

  • #2
    Hello!

    It is nice to see people using Ray in the cloud!


    According to Amazon Web Services LLC, the specification of the instance you are using is:

    High-Memory Quadruple Extra Large Instance
    • 68.4 GB of memory
    • 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each)
    • 1690 GB of instance storage
    • 64-bit platform
    • I/O Performance: High
    • API name: m2.4xlarge


    If you run less /proc/cpuinfo, you will see 8 processor cores, not 26.
    Keep in mind that Amazon instances are virtual.
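
    For example, a quick way to confirm how many cores the instance actually exposes (standard Linux commands, nothing Ray-specific):

    Code:
    nproc                               # number of processing units available to the instance
    grep -c ^processor /proc/cpuinfo    # count the processor entries in /proc/cpuinfo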

    Therefore, I suspect that some error occurred in the hypervisor supervising the virtual machine, or in your instance itself, because the load was too high (a load of 24 on 8 processor cores is too high).

    Launching 24 Ray processes on an 8-core virtual machine results in over-subscription of the cores. This means that you had 3 Ray processes per available processor core, which will cause a lot of context switches.
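
    If you want to see the effect while a job is running, standard tools will show it; for example:

    Code:
    uptime       # load average; sustained values well above 8 on this instance mean the cores are over-subscribed
    vmstat 5     # the "cs" column reports context switches per second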

    In the Ray journal, we can see this because the speed of the step called RAY_SLAVE_MODE_EXTRACT_VERTICES is only 51 units/second.
    This speed should be way above 1000. Depending on your read length, this speed can reach 3000-4000 units/second per processor core.
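
    As a rough sanity check, the remaining-time estimates in your log follow directly from that speed; for Rank 9, using the numbers from your log above:

    Code:
    # (5089855 - 3370001) remaining units at 51 units/second
    echo $(( (5089855 - 3370001) / 51 ))    # 33722 seconds, i.e. about 9 hours 22 minutes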


    You should therefore try with -n 8. You can also allocate several (let's say 4) instances and launch Ray with mpiexec -n 24 across the 4 instances (see the sketch below).

    This process is documented on my blog.
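
    For illustration, a rough sketch of both options (the hostfile name and host addresses are placeholders, and the Ray arguments are copied from your original command):

    Code:
    # Option 1: one instance, match the process count to the 8 visible cores
    mpirun -np 8 /home/ec2-user/Ray/Ray -p wgc339.notCombined_1.fastq wgc339.notCombined_2.fastq -s wgc339.extendedFrags.fastq -o wgc339-ray

    # Option 2: 24 processes spread over 4 instances with an Open MPI hostfile
    # (assumes the Ray binary and the FASTQ files are reachable at the same paths on every instance)
    # hosts.txt might look like:
    #   ip-10-0-0-1 slots=6
    #   ip-10-0-0-2 slots=6
    #   ip-10-0-0-3 slots=6
    #   ip-10-0-0-4 slots=6
    mpiexec -n 24 -hostfile hosts.txt /home/ec2-user/Ray/Ray -p wgc339.notCombined_1.fastq wgc339.notCombined_2.fastq -s wgc339.extendedFrags.fastq -o wgc339-ray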


    I hope this is helpful for you and the community.



    • #3
      Sebastien:

      Thank you again for all your help with this. Having the difference between cores and threads disambiguated is very useful.

      BTW, a handy way to manage clusters on AWS is StarCluster from MIT -- it automates setting up, tearing down, and resizing a cluster.
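
      For anyone curious, a rough sketch of the StarCluster workflow (the cluster name is a placeholder and the exact commands are from memory, so check the StarCluster documentation):

      Code:
      # define a cluster template in ~/.starcluster/config first, then:
      starcluster start mycluster       # launch the cluster on EC2
      starcluster sshmaster mycluster   # log in to the master node
      starcluster addnode mycluster     # grow the cluster by one node
      starcluster terminate mycluster   # tear everything down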
