  • #31
    Seg fault when running on 12 or more processors

    I am getting a seg fault when running on 12, 24, or 32 processors. This is output from a 32-processor job. I am using FASTQs that are about 250GB each.
    Thoughts? It works for 4, 6, and 8 processors so far (the job has yet to finish). I did not test 10.

    [compute-2-1:17105] *** Process received signal ***
    [compute-2-1:17105] Signal: Segmentation fault (11)
    [compute-2-1:17105] Signal code: Address not mapped (1)
    [compute-2-1:17105] Failing at address: (nil)
    [compute-2-1:17105] [ 0] /lib64/libpthread.so.0 [0x331e40eb10]
    [compute-2-1:17105] [ 1] /lib64/libc.so.6(memcpy+0x15b) [0x331d87c24b]
    [compute-2-1:17105] [ 2] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0(ompi_convertor_unpack+0xae) [0x2b6a6db846ae]
    [compute-2-1:17105] [ 3] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dc1fc6e]
    [compute-2-1:17105] [ 4] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dc1cc56]
    [compute-2-1:17105] [ 5] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbb6e38]
    [compute-2-1:17105] [ 6] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libopen-pal.so.0(opal_progress+0x5a) [0x2b6a6e1a04ea]
    [compute-2-1:17105] [ 7] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6db77135]
    [compute-2-1:17105] [ 8] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbc5086]
    [compute-2-1:17105] [ 9] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbc5737]
    [compute-2-1:17105] [10] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbbb3d0]
    [compute-2-1:17105] [11] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbcd3c9]
    [compute-2-1:17105] [12] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0(MPI_Bcast+0x171) [0x2b6a6db8be11]
    [compute-2-1:17105] [13] pBWA(bwt_restore_bwt+0x7c) [0x407fbc]
    [compute-2-1:17105] [14] pBWA(bwa_aln_core+0x81) [0x408b01]
    [compute-2-1:17105] [15] pBWA(bwa_aln+0x196) [0x409056]
    [compute-2-1:17105] [16] pBWA(main+0xec) [0x4281ac]
    [compute-2-1:17105] [17] /lib64/libc.so.6(__libc_start_main+0xf4) [0x331d81d994]
    [compute-2-1:17105] [18] pBWA [0x404b79]
    [compute-2-1:17105] *** End of error message ***
    --------------------------------------------------------------------------

    • #32
      Hm... could you answer a couple of questions for me?

      1. What is your system's node configuration (# of nodes, # of cores per node, RAM per node)?

      2. How are you splitting the jobs up (i.e. how many processes are you trying to put on each node)?

      • #33
        Originally posted by dp05yk View Post
        Hm... could you answer a couple of questions for me?

        1. What is your system's node configuration (# of nodes, # of cores per node, RAM per node)?

        2. How are you splitting the jobs up (i.e. how many processes are you trying to put on each node)?
        3 nodes, 24 hyperthreaded processors per node (12 physical). About 50GB of RAM per node.

        I am just submitting via qsub to SGE and letting SGE decide on the distribution. With 32 processors I think I got 24 on one node and 8 on the other. With 24 I got 23 on one and 1 on the other.

        Is it a RAM issue?

        • #34
          Originally posted by ichorny View Post
          3 nodes, 24 hyperthreaded processors per node (12 physical). About 50GB of RAM per node.

          I am just submitting via qsub to SGE and letting SGE decide on the distribution. With 32 processors I think I got 24 on one node and 8 on the other. With 24 I got 23 on one and 1 on the other.

          Is it a RAM issue?
          It could possibly be a RAM issue... with MPI applications, each instance of the program is completely separate from the others. I.e. where threaded applications share global variables, MPI applications do not. So if pBWA requires x GB of RAM for 1 processor, it will require p*x GB of RAM for p processors... if you only have 50GB of RAM per node and you're running 24 processes on that node, you're only allowing ~2.1GB of RAM per process... that's cutting it mighty fine.

          What you may want to try is combining multithreading and pBWA... use 24 processors (again), but tell the system to put 8 on each of your 3 nodes... then in your pBWA aln command, use -n 3 to spawn 3 threads per process, so you'll use all 72 of your (hyperthreaded) cores... tell me how that works.
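
          For concreteness, here is a minimal sketch of what that kind of submission could look like. The SGE parallel environment name, the mpirun flags, and the file names are assumptions; the pBWA options -n (threads per process) and -f (output name) are as used elsewhere in this thread.

          #!/bin/bash
          # Hypothetical SGE job script: 24 MPI processes spread over 3 nodes,
          # each process spawning 3 threads (8 processes x 3 threads = 24 per node).
          #$ -pe orte 24        # PE name and slot layout are site-specific (assumed)
          #$ -cwd

          # --npernode caps processes per node (Open MPI); -n 3 is the per-process
          # thread count described above. Paths are placeholders.
          mpirun -np 24 --npernode 8 ./pBWA aln -n 3 -f aln_out.sai ref.fa reads.fq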

          • #35
            I don't see the point of using MPI/parallelization for a process that is embarrassingly parallel. BWA has great multi-threaded functionality, and merging with samtools is easy. So it is very easy to chunk the reads, run the chunks multi-threaded, and then merge the BAM files.
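
            A rough sketch of that chunk-and-merge approach (chunk size, file names, and thread count are placeholders; single-end samse is shown to keep it short):

            # Split the FASTQ into chunks of 4M reads (16M lines each); placeholder size.
            split -l 16000000 reads.fq chunk_

            # Align each chunk with multithreaded bwa, then convert to BAM.
            for c in chunk_*; do
                bwa aln -t 8 ref.fa "$c" > "$c.sai"
                bwa samse ref.fa "$c.sai" "$c" | samtools view -bS - > "$c.bam"
            done

            # Merge the per-chunk BAMs into one file for downstream processing.
            samtools merge merged.bam chunk_*.bam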

            • #36
              Originally posted by dp05yk View Post
              It could possibly be a RAM issue... with MPI applications, each instance of the program is completely separate from the others. I.e. where threaded applications share global variables, MPI applications do not. So if pBWA requires x GB of RAM for 1 processor, it will require p*x GB of RAM for p processors... if you only have 50GB of RAM per node and you're running 24 processes on that node, you're only allowing ~2.1GB of RAM per process... that's cutting it mighty fine.

              What you may want to try is combining multithreading and pBWA... use 24 processors (again), but tell the system to put 8 on each of your 3 nodes... then in your pBWA aln command, use -n 3 to spawn 3 threads per process, so you'll use all 72 of your (hyperthreaded) cores... tell me how that works.
              Does samse/sampe support multithreading? There does not seem to be an option listed.

              • #37
                Originally posted by rskr View Post
                I don't see the point of using MPI/parallelization for a process that is embarrassingly parallel. BWA has great multi-threaded functionality, and merging with samtools is easy. So it is very easy to chunk the reads, run the chunks multi-threaded, and then merge the BAM files.
                Actually, BWA only has multi-threaded functionality for half of the process: sampe/samse is not multithreaded. Moreover, when I initially released pBWA, BWA's multithreading was inefficient for anything more than ~8 threads (Google it... there are multiple threads documenting this issue prior to the update). At pBWA's release, running pBWA with 24 processes was faster _just for aln_ than BWA was with 24 threads. Obviously pBWA was faster for sampe/samse. FYI - it was my edits that improved BWA's multithreading efficiency.

                You're right - this isn't an enormous breakthrough but it has its advantages. On the cluster I use it's much easier to get a large MPI job scheduled than hundreds of serial jobs... further increasing the usefulness of pBWA.

                • #38
                  Originally posted by ichorny View Post
                  Does samse/sampe support multithreading? There does not seem to be an option listed.
                  Unfortunately not. This is why running pBWA is all about finding the right balance between multithreading and parallelism. If you run (as I suggested) 24 processes across 3 nodes, each with 3 threads, you'll just need to run sampe/samse with 24 processes and no threads. That should work just fine.
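
                  Something along these lines, for example (a sketch only: the sampe argument order is assumed to mirror stock BWA, the paths are placeholders, and since pBWA writes per-process output files the -f name likely acts as a prefix rather than a single file):

                  # 24 MPI processes, no threading, for the sampe step.
                  mpirun -np 24 ./pBWA sampe -f out.sam ref.fa reads_1.sai reads_2.sai reads_1.fq reads_2.fq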

                  • #39
                    Now I am getting a seg fault in the sampe step. This is using 8 of the 12 cores on a machine with 50GB of memory. Thoughts?

                    Proc 1: [bwa_seq_open] seeked to 31248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
                    Proc 4: [bwa_seq_open] seeked to 124248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
                    Proc 3: [bwa_seq_open] seeked to 93248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
                    Proc 6: [bwa_seq_open] seeked to 186248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
                    Proc 3: [bwa_seq_open] seeked to 93248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
                    Proc 7: [bwa_seq_open] seeked to 217248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
                    Proc 7: [bwa_seq_open] seeked to 217248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
                    Proc 7: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:5226:2182 SDUS-BRUNO-106:1:0:4:21:5226:2182
                    Proc 6: [bwa_seq_open] seeked to 186248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
                    Proc 6: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:4608:2141 SDUS-BRUNO-106:1:0:4:21:4608:2141
                    Proc 7: [bwa_sai2sam_pe_core] 124 reads
                    Proc 7: [bwa_sai2sam_pe_core] convert to sequence coordinate...
                    Proc 6: [bwa_sai2sam_pe_core] 125 reads
                    Proc 6: [bwa_sai2sam_pe_core] convert to sequence coordinate...
                    Proc 0: [bwa_seq_open] seeked to 0 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
                    Proc 0: [bwa_seq_open] seeked to 0 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
                    Proc 2: [bwa_seq_open] seeked to 62248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
                    Proc 1: [bwa_seq_open] seeked to 31248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
                    Proc 2: [bwa_seq_open] seeked to 62248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
                    Proc 1: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:1817:2166 SDUS-BRUNO-106:1:0:4:21:1817:2166
                    Proc 2: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:2392:2241 SDUS-BRUNO-106:1:0:4:21:2392:2241
                    Proc 5: [bwa_seq_open] seeked to 155248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
                    Proc 5: [bwa_seq_open] seeked to 155248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
                    Proc 5: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:4089:2188 SDUS-BRUNO-106:1:0:4:21:4089:2188
                    Proc 4: [bwa_seq_open] seeked to 124248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
                    Proc 3: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:2841:2141 SDUS-BRUNO-106:1:0:4:21:2841:2141
                    Proc 4: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:3574:2130 SDUS-BRUNO-106:1:0:4:21:3574:2130
                    Proc 3: [bwa_sai2sam_pe_core] 125 reads
                    Proc 3: [bwa_sai2sam_pe_core] convert to sequence coordinate...
                    Proc 4: [bwa_sai2sam_pe_core] 125 reads
                    Proc 4: [bwa_sai2sam_pe_core] convert to sequence coordinate...
                    Proc 1: [bwa_sai2sam_pe_core] 125 reads
                    Proc 1: [bwa_sai2sam_pe_core] convert to sequence coordinate...
                    Proc 2: [bwa_sai2sam_pe_core] 125 reads
                    Proc 2: [bwa_sai2sam_pe_core] convert to sequence coordinate...
                    Proc 5: [bwa_sai2sam_pe_core] 125 reads
                    Proc 5: [bwa_sai2sam_pe_core] convert to sequence coordinate...
                    Proc 0: [bwa_sai2sam_pe_core] 126 reads
                    Proc 0: [bwa_sai2sam_pe_core] convert to sequence coordinate...
                    Broadcasting BWT (this may take a while)... done!
                    Broadcasting SA... done!
                    Broadcasting BWT (this may take a while)... [compute-2-0:15546] *** Process received signal ***
                    [compute-2-0:15546] Signal: Segmentation fault (11)
                    [compute-2-0:15546] Signal code: Address not mapped (1)
                    [compute-2-0:15546] Failing at address: (nil)
                    [compute-2-0:15546] [ 0] /lib64/libpthread.so.0 [0x3b43e0eb10]
                    [compute-2-0:15546] [ 1] /lib64/libc.so.6(memcpy+0x15b) [0x3b4327c24b]
                    [compute-2-0:15546] [ 2] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0(ompi_convertor_unpack+0xae) [0x2b904780c6ae]
                    [compute-2-0:15546] [ 3] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90478a7c6e]
                    [compute-2-0:15546] [ 4] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90478a4c56]
                    [compute-2-0:15546] [ 5] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b904783ee38]
                    [compute-2-0:15546] [ 6] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libopen-pal.so.0(opal_progress+0x5a) [0x2b9047e284ea]
                    [compute-2-0:15546] [ 7] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90477ff135]
                    [compute-2-0:15546] [ 8] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b904784d086]
                    [compute-2-0:15546] [ 9] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b904784d737]
                    [compute-2-0:15546] [10] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90478433d0]
                    [compute-2-0:15546] [11] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90478553c9]
                    [compute-2-0:15546] [12] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0(MPI_Bcast+0x171) [0x2b9047813e11]
                    [compute-2-0:15546] [13] pBWA(bwt_restore_bwt+0x7c) [0x407fbc]
                    [compute-2-0:15546] [14] pBWA(bwa_cal_pac_pos_pe+0x1b8b) [0x41ad4b]
                    [compute-2-0:15546] [15] pBWA(bwa_sai2sam_pe_core+0x3af) [0x41b1bf]
                    [compute-2-0:15546] [16] pBWA(bwa_sai2sam_pe+0x415) [0x41be45]
                    [compute-2-0:15546] [17] pBWA(main+0x96) [0x428156]
                    [compute-2-0:15546] [18] /lib64/libc.so.6(__libc_start_main+0xf4) [0x3b4321d994]
                    [compute-2-0:15546] [19] pBWA [0x404b79]
                    [compute-2-0:15546] *** End of error message ***
                    --------------------------------------------------------------------------
                    mpirun noticed that process rank 3 with PID 15546 on node compute-2-0.local exited on signal 11 (Segmentation fault).

                    • #40
                      Given that it's the same error message in the same function, I'm going to say RAM again - sampe/samse require more RAM than aln, because sampe/samse require every process to hold the entire suffix array (hence the 'Broadcasting SA') as well as the BWT.

                      Just play around with different parallel/threaded combinations... eventually you will find the optimal combination for your system.
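
                      One quick way to sanity-check a combination before launching it (a sketch; the per-process figure is a placeholder for whatever one pBWA sampe process actually peaks at on your data, e.g. as reported by /usr/bin/time -v on a single-process run):

                      # Rule of thumb: processes_per_node * peak_RAM_per_process must stay under node RAM.
                      NODE_RAM_GB=50
                      PER_PROC_GB=5            # replace with the measured peak of one sampe process
                      echo $(( NODE_RAM_GB / PER_PROC_GB ))   # max processes per node, with no headroom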

                      EDIT: just realized you were only using 8 processors... perhaps there are other users utilizing RAM on your cluster?
                      Last edited by dp05yk; 09-16-2011, 04:18 AM.

                      • #41
                        New error. The alignment works on a smaller number of processors, but only when they are all on the same node. This job was run across nodes. According to the log it finished. Thoughts?

                        Error:
                        The pBWA alignment failed.
                        The output file is empty. You may simply have no matches, or there may be an error with your input file or settings.

                        End of Log File:
                        Proc 3: [mergeFilesIntoOne] Finished merge in 2.75 secs
                        Proc 11: [mergeFilesIntoOne] Finished merge in 2.67 secs
                        Proc 0: [mergeFilesIntoOne] Finished merge in 2.82 secs
                        Proc 1: [mergeFilesIntoOne] Finished merge in 2.68 secs
                        Proc 7: [mergeFilesIntoOne] Finished merge in 2.72 secs
                        Proc 4: [mergeFilesIntoOne] Finished merge in 2.74 secs
                        Proc 8: [mergeFilesIntoOne] Finished merge in 2.83 secs
                        Proc 9: [mergeFilesIntoOne] Finished merge in 2.67 secs
                        Proc 2: [mergeFilesIntoOne] Finished merge in 2.68 secs
                        Proc 5: [mergeFilesIntoOne] Finished merge in 2.72 secs
                        Proc 10: [mergeFilesIntoOne] Finished merge in 2.67 secs
                        Proc 6: [mergeFilesIntoOne] Finished merge in 2.77 secs

                        real 11m28.892s
                        user 21m12.224s
                        sys 20m3.094s

                        • #42
                          Hi, dp05yk!

                          I am a newbie. I analysed my data recently and found that sampe is very slow. Then I came across pBWA and tried to use it to speed up the analysis.

                          However, I ran into some problems.

                          What does the parameter 'NumReads' mean? And how can I get that number?

                          And when I ran the following command, something went wrong.

                          [wencanh@node9 pBWA]$ ./pBWA aln -t 10 -f /data/a.sai /data/hg19/human_g1k_v37.fasta.gz /data/lane2.R1.clean.fq.gz 100000
                          librdmacm: couldn't read ABI version.
                          librdmacm: assuming: 4
                          CMA: unable to get RDMA device list
                          --------------------------------------------------------------------------
                          [[12279,1],0]: A high-performance Open MPI point-to-point messaging module
                          was unable to find any relevant network interfaces:

                          Module: OpenFabrics (openib)
                          Host: node9

                          Another transport will be used instead, although this may result in
                          lower performance.
                          --------------------------------------------------------------------------
                          [bwa_aln] 17bp reads: max_diff = 2
                          [bwa_aln] 38bp reads: max_diff = 3
                          [bwa_aln] 64bp reads: max_diff = 4
                          [bwa_aln] 93bp reads: max_diff = 5
                          [bwa_aln] 124bp reads: max_diff = 6
                          [bwa_aln] 157bp reads: max_diff = 7
                          [bwa_aln] 190bp reads: max_diff = 8
                          [bwa_aln] 225bp reads: max_diff = 9
                          Proc 0: [bwa_seq_open] seeked to 0 in /data/lane2.R1.clean.fq.gz
                          [bwa_seq_open] fail to open file '100000'. Abort!
                          [node9:29323] *** Process received signal ***
                          [node9:29323] Signal: Aborted (6)
                          [node9:29323] Signal code: (-6)
                          [node9:29323] [ 0] /lib64/libpthread.so.0 [0x33c400eb10]
                          [node9:29323] [ 1] /lib64/libc.so.6(gsignal+0x35) [0x33c3430265]
                          [node9:29323] [ 2] /lib64/libc.so.6(abort+0x110) [0x33c3431d10]
                          [node9:29323] [ 3] ./pBWA [0x404f0d]
                          [node9:29323] [ 4] ./pBWA(bwa_seq_open+0x62) [0x412792]
                          [node9:29323] [ 5] ./pBWA(bwa_aln+0x88b) [0x40974b]
                          [node9:29323] [ 6] ./pBWA(main+0xec) [0x4281ac]
                          [node9:29323] [ 7] /lib64/libc.so.6(__libc_start_main+0xf4) [0x33c341d994]
                          [node9:29323] [ 8] ./pBWA [0x404b79]
                          [node9:29323] *** End of error message ***
                          Aborted

                          And when I removed NumReads, it seemed OK! But the file "a.sai" actually had nothing in it.

                          [wencanh@node9 pBWA]$ ./pBWA aln -t 10 -f /data/a.sai /data/hg19/human_g1k_v37.fasta.gz /data/lane2.R1.clean.fq.gz
                          librdmacm: couldn't read ABI version.
                          librdmacm: assuming: 4
                          CMA: unable to get RDMA device list
                          --------------------------------------------------------------------------
                          [[12253,1],0]: A high-performance Open MPI point-to-point messaging module
                          was unable to find any relevant network interfaces:

                          Module: OpenFabrics (openib)
                          Host: node9

                          Another transport will be used instead, although this may result in
                          lower performance.
                          --------------------------------------------------------------------------
                          [bwa_aln] 17bp reads: max_diff = 2
                          [bwa_aln] 38bp reads: max_diff = 3
                          [bwa_aln] 64bp reads: max_diff = 4
                          [bwa_aln] 93bp reads: max_diff = 5
                          [bwa_aln] 124bp reads: max_diff = 6
                          [bwa_aln] 157bp reads: max_diff = 7
                          [bwa_aln] 190bp reads: max_diff = 8
                          [bwa_aln] 225bp reads: max_diff = 9
                          Proc 0: [bwa_seq_open] seeked to 0 in /data/lane2.R1.clean.fq.gz
                          Broadcasting BWT (this may take a while)... done!
                          Broadcasting BWT (this may take a while)... done!
                          Proc 0: Total time taken: 3.88 sec

                          • #43
                            Does anyone know what to do with all the SAM files I get from pBWA? Can I just concatenate them into one big SAM file for downstream processing, or what should I do?
                            Thanks

                            • #44
                              Does pBWA work on a multi-core machine with lots of RAM? I have a six-core machine with 64GB RAM. Can it run on six cores for samse/sampe in my case?

                              • #45
                                Originally posted by dp05yk View Post
                                 It could possibly be a RAM issue... with MPI applications, each instance of the program is completely separate from the others. I.e. where threaded applications share global variables, MPI applications do not. So if pBWA requires x GB of RAM for 1 processor, it will require p*x GB of RAM for p processors... if you only have 50GB of RAM per node and you're running 24 processes on that node, you're only allowing ~2.1GB of RAM per process... that's cutting it mighty fine.

                                 What you may want to try is combining multithreading and pBWA... use 24 processors (again), but tell the system to put 8 on each of your 3 nodes... then in your pBWA aln command, use -n 3 to spawn 3 threads per process, so you'll use all 72 of your (hyperthreaded) cores... tell me how that works.
                                 I'm in the same situation. Since I'm in a test environment, I have 2 nodes with 8 processors each and 64GB RAM per node.

                                 So ideally I would spawn 8 threads per node across the 2 nodes so that all 16 processors are used.

                                 I'm using SGE too, and I have MPI installed on my test cluster.
                                 I would like to know how an SGE user managed to split this process up with an MPI configuration inside an SGE job file or qsub command.

                                 That would be awesome, since right now I can only run bwa aln and samse on 1 node with 8 threads.

                                 Thanks!
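
                                 For what it's worth, a minimal sketch of the kind of SGE job file being discussed here (the parallel environment name, the mpirun per-node flag, and the file names are assumptions; check how your own cluster's PE is configured):

                                 #!/bin/bash
                                 # Hypothetical SGE job file: 16 MPI processes, 8 per node across 2 nodes.
                                 #$ -N pbwa_aln
                                 #$ -pe orte 16        # PE name and slot layout are site-specific (assumed)
                                 #$ -cwd

                                 # Open MPI picks up the SGE allocation; --npernode is an assumed way to cap
                                 # processes per node if the PE does not already enforce it.
                                 mpirun -np 16 --npernode 8 ./pBWA aln -f aln_out.sai ref.fa reads.fq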
