Old 05-06-2011, 05:08 AM   #21
YEG
Junior Member
 
Location: Bethesda, MD

Join Date: Apr 2008
Posts: 2
Default

Quote:
Originally Posted by dp05yk View Post
I should probably just have showed you:

./pBWA samse -f out ~/hg18/hg18 a1 all.fq 29424134
This may be a small bug: I had to rename the *.sai files for the above command to work. The files need an extra '-', so [prefix]-0.sai needs to be renamed [prefix]--0.sai, and so on for every file made with pBWA aln.

Here's the pBWA aln command I used:

Code:
mpirun -np 3 -hostfile hostfile pBWA aln -f a1 -t 24 ~/hg18/hg18 all.fq 29424134
YEG is offline   Reply With Quote
Old 05-06-2011, 05:43 AM   #22
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Quote:
Originally Posted by YEG View Post
This may be a small bug: I had to rename the *.sai files for the above command to work. The files need an extra '-', so [prefix]-0.sai needs to be renamed [prefix]--0.sai, and so on for every file made with pBWA aln.

Here's the pBWA aln command I used:

Code:
mpirun -np 3 -hostfile hostfile pBWA aln -f a1 -t 24 ~/hg18/hg18 all.fq 29424134
That's... really strange. I just checked the code (for both revisions 21 and 30) and it seems like it should be functioning properly... both bwase and bwape take the entered prefix and concatenate "-%d.sai", where %d = processor rank.
dp05yk is offline   Reply With Quote
Old 05-06-2011, 06:06 AM   #23
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Actually, YEG, I did find a bug; thanks for pointing this out to me. The code was assigning the processor rank AFTER determining the filename. I guess every system behaves differently, so yours was assigning a rank of -1, hence the additional dash.

I hadn't caught this because I did most, if not all, of my testing with the sampe command, as it seemed to be more popular.

I'll be uploading the latest revision to the SourceForge page today. Thanks for the input!
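For anyone curious, here is a minimal sketch of the ordering problem (illustration only, not the actual pBWA source): if the .sai filename is built before MPI_Comm_rank() is called, the placeholder rank value leaks into the name and produces the extra dash.

Code:
/* Illustration only, not the actual pBWA source: the .sai name must be
 * built AFTER the rank is known, otherwise the placeholder value ends up
 * in the filename and adds an extra dash. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = -1;          /* placeholder until MPI_Comm_rank() fills it in */
    char fn[256];

    MPI_Init(&argc, &argv);

    /* Buggy order: filename built while rank is still the placeholder. */
    snprintf(fn, sizeof(fn), "%s-%d.sai", "out", rank);    /* e.g. "out--1.sai" */

    /* Fixed order: query the rank first, then build the filename. */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    snprintf(fn, sizeof(fn), "%s-%d.sai", "out", rank);    /* "out-0.sai", ... */

    printf("rank %d writes %s\n", rank, fn);
    MPI_Finalize();
    return 0;
}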
dp05yk is offline   Reply With Quote
Old 07-05-2011, 06:56 AM   #24
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Just to let everyone know, an alternate version of pBWA is now available that cleans up the workflow a bit. The user is no longer required to enter the number of reads in the FASTQ file, and SAM information is output to one file in parallel by all processors. There are also a few minor stability enhancements that should make pBWA compatible with MPICH. Performance appears to be similar to pBWA-r32. Thanks go to Rob Egan for the enhancements.

It's available at http://sourceforge.net/projects/pbwa ... thanks!

Last edited by dp05yk; 07-05-2011 at 07:12 AM.
dp05yk is offline   Reply With Quote
Old 08-22-2011, 08:57 PM   #25
sheng
Junior Member
 
Location: NYC

Join Date: Apr 2011
Posts: 5
Default

Hi dp05yk,

Thanks for releasing pBWA! The discussion here is very helpful for learning how to use it. However, I ran into problems installing pBWA, and I could not find any README file in the source code directory. Would you please help me with the error message below, which I got when trying to compile it? I read the pBWA home page and know about the MPI requirement - "pBWA requires a multi-node (or multi-core) *nix system with a parallel scheduler alongside the OpenMPI C library in order to compile and run." But I am not sure how to set up that environment so that pBWA will compile.

Thanks a lot!


make
#################Error################
make[1]: Entering directory `/panda_scratch_homes001/shl2018/software/alignment/pBWA'
make[1]: Nothing to be done for `lib'.
make[1]: Leaving directory `/panda_scratch_homes001/shl2018/software/alignment/pBWA'
make[1]: Entering directory `/panda_scratch_homes001/shl2018/software/alignment/pBWA/bwt_gen'
mpicc -c -g -Wall -m64 -O2 -DHAVE_PTHREAD -D_LARGEFILE64_SOURCE bwt_gen.c -o bwt_gen.o
make[1]: mpicc: Command not found
make[1]: *** [bwt_gen.o] Error 127
make[1]: Leaving directory `/panda_scratch_homes001/shl2018/software/alignment/pBWA/bwt_gen'
make: *** [lib-recur] Error 1
############### Error ###################

Quote:
Originally Posted by dp05yk View Post
Just to let everyone know, an alternate version of pBWA is now available that cleans up the workflow a bit. The user is no longer required to enter the number of reads in the FASTQ file, and SAM information is output to one file in parallel by all processors. There are also a few minor stability enhancements that should make pBWA compatible with MPICH. Performance appears to be similar to pBWA-r32. Thanks go to Rob Egan for the enhancements.

It's available at http://sourceforge.net/projects/pbwa ... thanks!
sheng is offline   Reply With Quote
Old 08-23-2011, 04:22 AM   #26
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Hi sheng,

These requirements can be broken down as follows. pBWA is a _parallel_ implementation of BWA, which means that unless your computer system has multiple processors, this software will be of no use to you. Essentially, what pBWA does is distribute massive input read files over multiple processors in order to execute BWA in parallel. If you do not have access to a computer cluster or parallel machine, this is impossible, since you do not have multiple processors to distribute over. If you have a standard home computer with a multi-_core_ processor, just use the multithreading option available in the latest release of BWA.

As for the MPICC compiler - if you in fact do have access to a computing cluster, you'll need to ask one of the administrators whether an MPI compiler is installed (MPICH or OpenMPI both work, actually). If it is installed, it could have an alias other than "mpicc", in which case you'll have to modify the makefile accordingly.

I hope this clears some issues up for you! I have a suspicion you may have been trying to install this on your home or basic lab PC, in which case you will be better off using BWA.

Thanks for posting!
dp05yk is offline   Reply With Quote
Old 08-23-2011, 06:31 AM   #27
sheng
Junior Member
 
Location: NYC

Join Date: Apr 2011
Posts: 5
Smile pBWA installation

Hi dp05yk,

Thanks a lot for your reply! I am working on a cluster that has multiple nodes and cores, and I am sure we have OpenMPI installed on it. What information about OpenMPI do I need in order to change the makefile, and which part of the makefile do I need to change? To compile, do I just type make, or are there other steps?

Cheers,
Sheng

Quote:
Originally Posted by dp05yk View Post
Hi sheng,

These requirements can be broken down as follows. pBWA is a _parallel_ implementation of BWA, which means that unless your computer system has multiple processors, this software will be of no use to you. Essentially, what pBWA does is distribute massive input read files over multiple processors in order to execute BWA in parallel. If you do not have access to a computer cluster or parallel machine, this is impossible, since you do not have multiple processors to distribute over. If you have a standard home computer with a multi-_core_ processor, just use the multithreading option available in the latest release of BWA.

As for the MPICC compiler - if you in fact do have access to a computing cluster, you'll need to ask one of the administrators whether an MPI compiler is installed (MPICH or OpenMPI both work, actually). If it is installed, it could have an alias other than "mpicc", in which case you'll have to modify the makefile accordingly.

I hope this clears some issues up for you! I have a suspicion you may have been trying to install this on your home or basic lab PC, in which case you will be better off using BWA.

Thanks for posting!
sheng is offline   Reply With Quote
Old 08-23-2011, 06:36 AM   #28
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Hi Sheng,

You need to figure out the alias used to call the MPI compiler. On most clusters this will be "mpicc"... you'll have to contact your system administrator to find out what it is, or do a quick search for other common aliases.

Then, in both makefiles (one in the root folder and one in the bwt_gen folder), change
CC = mpicc
to
CC = youralias

where youralias is the alias used to invoke your MPI compiler.
dp05yk is offline   Reply With Quote
Old 09-14-2011, 02:40 PM   #29
ichorny
Junior Member
 
Location: San Diego

Join Date: Sep 2011
Posts: 7
Default pBWA and fastq.gz

I notice that when I run with mpirun and gzipped FASTQ files, it returns a SAM file containing only the header. If I run without mpirun, it works just fine.

BTW, I am using v2.

Thanks,

Ilya

Last edited by ichorny; 09-14-2011 at 02:52 PM.
ichorny is offline   Reply With Quote
Old 09-14-2011, 03:21 PM   #30
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

That's interesting... as the pBWA website notes, gzipped FASTQ files are not supported, since we require random file access to split up the input files.
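To make that concrete, here is a rough sketch of the splitting scheme (an assumption for illustration, not the actual pBWA source): each rank seeks to its own byte offset in the plain-text FASTQ and scans forward to the next record, which is exactly the kind of random access a gzip stream cannot provide.

Code:
/* Rough sketch of seek-based input splitting (an assumption for
 * illustration, not the actual pBWA source). Each MPI rank jumps to its
 * share of the plain-text FASTQ and scans forward to the next record
 * header. A gzip stream has no usable byte offsets into the uncompressed
 * data, so this scheme cannot work on .fastq.gz input. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    FILE *fp = (argc > 1) ? fopen(argv[1], "r") : NULL;  /* plain FASTQ, e.g. all.fq */
    if (fp) {
        fseek(fp, 0L, SEEK_END);
        long size = ftell(fp);

        /* Start this rank at its slice of the file... */
        fseek(fp, (size / nprocs) * rank, SEEK_SET);

        /* ...then advance to the next '@' header line so the rank begins on
         * a record boundary (real code must also handle '@' appearing inside
         * quality strings; omitted here for brevity). */
        char line[4096];
        while (fgets(line, sizeof(line), fp) && line[0] != '@')
            ;
        printf("rank %d starts near offset %ld\n", rank, ftell(fp));
        fclose(fp);
    }
    MPI_Finalize();
    return 0;
}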
dp05yk is offline   Reply With Quote
Old 09-15-2011, 10:43 AM   #31
ichorny
Junior Member
 
Location: San Diego

Join Date: Sep 2011
Posts: 7
Default Seg fault when running on 12 or more processors

I am getting a seg fault when running on 12, 24, or 32 processors. This is output from a 32-processor job. I am using FASTQs that are about 250GB each.
Thoughts? It works for 4, 8, and 6 processors so far (job yet to finish). I did not test 10.

[compute-2-1:17105] *** Process received signal ***
[compute-2-1:17105] Signal: Segmentation fault (11)
[compute-2-1:17105] Signal code: Address not mapped (1)
[compute-2-1:17105] Failing at address: (nil)
[compute-2-1:17105] [ 0] /lib64/libpthread.so.0 [0x331e40eb10]
[compute-2-1:17105] [ 1] /lib64/libc.so.6(memcpy+0x15b) [0x331d87c24b]
[compute-2-1:17105] [ 2] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0(ompi_convertor_unpack+0xae) [0x2b6a6db846ae]
[compute-2-1:17105] [ 3] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dc1fc6e]
[compute-2-1:17105] [ 4] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dc1cc56]
[compute-2-1:17105] [ 5] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbb6e38]
[compute-2-1:17105] [ 6] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libopen-pal.so.0(opal_progress+0x5a) [0x2b6a6e1a04ea]
[compute-2-1:17105] [ 7] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6db77135]
[compute-2-1:17105] [ 8] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbc5086]
[compute-2-1:17105] [ 9] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbc5737]
[compute-2-1:17105] [10] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbbb3d0]
[compute-2-1:17105] [11] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b6a6dbcd3c9]
[compute-2-1:17105] [12] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0(MPI_Bcast+0x171) [0x2b6a6db8be11]
[compute-2-1:17105] [13] pBWA(bwt_restore_bwt+0x7c) [0x407fbc]
[compute-2-1:17105] [14] pBWA(bwa_aln_core+0x81) [0x408b01]
[compute-2-1:17105] [15] pBWA(bwa_aln+0x196) [0x409056]
[compute-2-1:17105] [16] pBWA(main+0xec) [0x4281ac]
[compute-2-1:17105] [17] /lib64/libc.so.6(__libc_start_main+0xf4) [0x331d81d994]
[compute-2-1:17105] [18] pBWA [0x404b79]
[compute-2-1:17105] *** End of error message ***
--------------------------------------------------------------------------
ichorny is offline   Reply With Quote
Old 09-15-2011, 10:46 AM   #32
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Hm... could you answer a couple of questions for me?

1. What is your system's nodal information (# nodes, # cores per node, RAM per node)?

2. How are you splitting the jobs up (i.e., how many processes are you trying to put on each node)?
dp05yk is offline   Reply With Quote
Old 09-15-2011, 10:59 AM   #33
ichorny
Junior Member
 
Location: San Diego

Join Date: Sep 2011
Posts: 7
Default

Quote:
Originally Posted by dp05yk View Post
Hm... could you answer a couple of questions for me?

1. What is your system's nodal information (# nodes, # cores per node, RAM per node)?

2. How are you splitting the jobs up (i.e., how many processes are you trying to put on each node)?
3 nodes, 24 hyperthreaded processors per node (12 physical cores). About 50 GB of RAM per node.

I am just submitting via qsub to SGE and letting SGE decide on the distribution. With 32 processors I think I got 24 on one node and 8 on the other. With 24 I got 23 on one and 1 on the other.

Is it a RAM issue?
ichorny is offline   Reply With Quote
Old 09-15-2011, 11:05 AM   #34
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Quote:
Originally Posted by ichorny View Post
3 nodes, 24 hyperthreaded processors per node (12 physical cores). About 50 GB of RAM per node.

I am just submitting via qsub to SGE and letting SGE decide on the distribution. With 32 processors I think I got 24 on one node and 8 on the other. With 24 I got 23 on one and 1 on the other.

Is it a RAM issue?
It could possibly be a RAM issue... with MPI applications, each instance of the program is completely separate from the others. I.e., where threaded applications share global variables, MPI applications do not. So if pBWA requires x GB of RAM for 1 processor, it will require p*x GB of RAM for p processors... if you only have 50 GB of RAM per node and you're running 24 processes on said node, you're only allowing ~2.1 GB of RAM per process... that's cutting it mighty fine.

What you may want to try is combining multithreading and pBWA... use 24 processes (again), but tell the system to put 8 on each of your 3 nodes... then in your pBWA aln command, use -t 3 to spawn 3 threads per process so you'll use all 72 of your cores... tell me how that works.
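For reference, a minimal sketch of that hybrid approach (illustration only, not the actual pBWA code): each MPI rank takes its own slice of the reads and spawns a few worker threads for that slice.

Code:
/* Minimal hybrid MPI + pthreads sketch (illustration only, not pBWA itself):
 * each MPI rank would handle its own slice of the reads and spawn a few
 * threads to work on that slice, giving the processes-times-threads
 * combination suggested above. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define N_THREADS 3

static void *worker(void *arg)
{
    int tid = *(int *)arg;
    /* ...align this thread's share of the rank's reads here... */
    printf("worker thread %d running\n", tid);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    pthread_t tid[N_THREADS];
    int ids[N_THREADS];
    for (int i = 0; i < N_THREADS; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);

    printf("rank %d finished its %d threads\n", rank, N_THREADS);
    MPI_Finalize();
    return 0;
}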
dp05yk is offline   Reply With Quote
Old 09-15-2011, 02:57 PM   #35
rskr
Senior Member
 
Location: Santa Fe, NM

Join Date: Oct 2010
Posts: 250
Default

I don't see the point of using MPI/parallelization for a process that is embarrassingly parallel. BWA has great multi-threaded functionality, and merging with samtools is easy, so it is very simple to chunk the reads, run the chunks multi-threaded, and then merge the BAM files.
rskr is offline   Reply With Quote
Old 09-15-2011, 03:08 PM   #36
ichorny
Junior Member
 
Location: San Diego

Join Date: Sep 2011
Posts: 7
Default

Quote:
Originally Posted by dp05yk View Post
It could possibly be a RAM issue... with MPI applications, each instance of the program is completely separate from the others. I.e., where threaded applications share global variables, MPI applications do not. So if pBWA requires x GB of RAM for 1 processor, it will require p*x GB of RAM for p processors... if you only have 50 GB of RAM per node and you're running 24 processes on said node, you're only allowing ~2.1 GB of RAM per process... that's cutting it mighty fine.

What you may want to try is combining multithreading and pBWA... use 24 processes (again), but tell the system to put 8 on each of your 3 nodes... then in your pBWA aln command, use -t 3 to spawn 3 threads per process so you'll use all 72 of your cores... tell me how that works.
Does samse/sampe support multithreading? There does not seem to be an option listed.
ichorny is offline   Reply With Quote
Old 09-15-2011, 03:11 PM   #37
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Quote:
Originally Posted by rskr View Post
I don't see the point of using MPI/parallelization for a process that is embarrassingly parallel. BWA has great multi-threaded functionality, and merging with samtools is easy, so it is very simple to chunk the reads, run the chunks multi-threaded, and then merge the BAM files.
Actually, BWA only has multi-threaded functionality for half of the process: sampe/samse is not multithreaded. Moreover, when I initially released pBWA, BWA's multithreading was inefficient for anything more than ~8 threads (google it... there are multiple threads describing this issue prior to the update). Upon release, pBWA with 24 processors was faster _just for aln_ than BWA was with 24 threads, and obviously pBWA was faster for sampe/samse. FYI - it was my edits that improved multithreading efficiency in BWA.

You're right - this isn't an enormous breakthrough, but it has its advantages. On the cluster I use, it's much easier to get a large MPI job scheduled than hundreds of serial jobs, which further increases the usefulness of pBWA.
dp05yk is offline   Reply With Quote
Old 09-15-2011, 03:12 PM   #38
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Quote:
Originally Posted by ichorny View Post
Does samse/sampe support multithreading? There does not seem to be an option listed.
Unfortunately not. This is why running pBWA is all about finding the right balance between multithreading and parallelism. If you ran aln (as I suggested) with 24 processes across your 3 nodes, each process with 3 threads, you'll just need to run sampe/samse with 24 processes and no threads. That should work just fine.
dp05yk is offline   Reply With Quote
Old 09-15-2011, 08:41 PM   #39
ichorny
Junior Member
 
Location: San Diego

Join Date: Sep 2011
Posts: 7
Default

Now I am getting a seg fault in the sampe step. This is using 8 of the 12 cores on a machine with 50GB of memory. Thoughts?

Proc 1: [bwa_seq_open] seeked to 31248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
Proc 4: [bwa_seq_open] seeked to 124248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
Proc 3: [bwa_seq_open] seeked to 93248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
Proc 6: [bwa_seq_open] seeked to 186248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
Proc 3: [bwa_seq_open] seeked to 93248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
Proc 7: [bwa_seq_open] seeked to 217248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
Proc 7: [bwa_seq_open] seeked to 217248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
Proc 7: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:5226:2182 SDUS-BRUNO-106:1:0:4:21:5226:2182
Proc 6: [bwa_seq_open] seeked to 186248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
Proc 6: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:4608:2141 SDUS-BRUNO-106:1:0:4:21:4608:2141
Proc 7: [bwa_sai2sam_pe_core] 124 reads
Proc 7: [bwa_sai2sam_pe_core] convert to sequence coordinate...
Proc 6: [bwa_sai2sam_pe_core] 125 reads
Proc 6: [bwa_sai2sam_pe_core] convert to sequence coordinate...
Proc 0: [bwa_seq_open] seeked to 0 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
Proc 0: [bwa_seq_open] seeked to 0 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
Proc 2: [bwa_seq_open] seeked to 62248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
Proc 1: [bwa_seq_open] seeked to 31248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
Proc 2: [bwa_seq_open] seeked to 62248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
Proc 1: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:1817:2166 SDUS-BRUNO-106:1:0:4:21:1817:2166
Proc 2: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:2392:2241 SDUS-BRUNO-106:1:0:4:21:2392:2241
Proc 5: [bwa_seq_open] seeked to 155248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3168.dat
Proc 5: [bwa_seq_open] seeked to 155248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
Proc 5: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:4089:2188 SDUS-BRUNO-106:1:0:4:21:4089:2188
Proc 4: [bwa_seq_open] seeked to 124248 in /home/galaxy/production/Sept06/galaxy-central/database/files/003/dataset_3169.dat
Proc 3: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:2841:2141 SDUS-BRUNO-106:1:0:4:21:2841:2141
Proc 4: [skipToNextPairedRecord] found SDUS-BRUNO-106:1:0:4:21:3574:2130 SDUS-BRUNO-106:1:0:4:21:3574:2130
Proc 3: [bwa_sai2sam_pe_core] 125 reads
Proc 3: [bwa_sai2sam_pe_core] convert to sequence coordinate...
Proc 4: [bwa_sai2sam_pe_core] 125 reads
Proc 4: [bwa_sai2sam_pe_core] convert to sequence coordinate...
Proc 1: [bwa_sai2sam_pe_core] 125 reads
Proc 1: [bwa_sai2sam_pe_core] convert to sequence coordinate...
Proc 2: [bwa_sai2sam_pe_core] 125 reads
Proc 2: [bwa_sai2sam_pe_core] convert to sequence coordinate...
Proc 5: [bwa_sai2sam_pe_core] 125 reads
Proc 5: [bwa_sai2sam_pe_core] convert to sequence coordinate...
Proc 0: [bwa_sai2sam_pe_core] 126 reads
Proc 0: [bwa_sai2sam_pe_core] convert to sequence coordinate...
Broadcasting BWT (this may take a while)... done!
Broadcasting SA... done!
Broadcasting BWT (this may take a while)... [compute-2-0:15546] *** Process received signal ***
[compute-2-0:15546] Signal: Segmentation fault (11)
[compute-2-0:15546] Signal code: Address not mapped (1)
[compute-2-0:15546] Failing at address: (nil)
[compute-2-0:15546] [ 0] /lib64/libpthread.so.0 [0x3b43e0eb10]
[compute-2-0:15546] [ 1] /lib64/libc.so.6(memcpy+0x15b) [0x3b4327c24b]
[compute-2-0:15546] [ 2] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0(ompi_convertor_unpack+0xae) [0x2b904780c6ae]
[compute-2-0:15546] [ 3] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90478a7c6e]
[compute-2-0:15546] [ 4] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90478a4c56]
[compute-2-0:15546] [ 5] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b904783ee38]
[compute-2-0:15546] [ 6] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libopen-pal.so.0(opal_progress+0x5a) [0x2b9047e284ea]
[compute-2-0:15546] [ 7] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90477ff135]
[compute-2-0:15546] [ 8] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b904784d086]
[compute-2-0:15546] [ 9] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b904784d737]
[compute-2-0:15546] [10] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90478433d0]
[compute-2-0:15546] [11] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0 [0x2b90478553c9]
[compute-2-0:15546] [12] /home/galaxy/production/Sept06/galaxy-central/tool-deps/mpirun/1.4.3/lib/libmpi.so.0(MPI_Bcast+0x171) [0x2b9047813e11]
[compute-2-0:15546] [13] pBWA(bwt_restore_bwt+0x7c) [0x407fbc]
[compute-2-0:15546] [14] pBWA(bwa_cal_pac_pos_pe+0x1b8b) [0x41ad4b]
[compute-2-0:15546] [15] pBWA(bwa_sai2sam_pe_core+0x3af) [0x41b1bf]
[compute-2-0:15546] [16] pBWA(bwa_sai2sam_pe+0x415) [0x41be45]
[compute-2-0:15546] [17] pBWA(main+0x96) [0x428156]
[compute-2-0:15546] [18] /lib64/libc.so.6(__libc_start_main+0xf4) [0x3b4321d994]
[compute-2-0:15546] [19] pBWA [0x404b79]
[compute-2-0:15546] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 3 with PID 15546 on node compute-2-0.local exited on signal 11 (Segmentation fault).
ichorny is offline   Reply With Quote
Old 09-16-2011, 04:54 AM   #40
dp05yk
Member
 
Location: Brock University

Join Date: Dec 2010
Posts: 66
Default

Given it's the same error message in the same function, I'm going to say RAM again - sampe/samse require more RAM than aln, because they require every processor to hold the entire suffix array (hence the 'Broadcasting SA' message) as well as the BWT.

Just play around with different parallel/threaded combinations... eventually you will find the optimal combination for your system.

EDIT: I just realized you were only using 8 processors... perhaps there are other users utilizing RAM on your cluster?
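To illustrate why the footprint scales with the number of ranks per node, here is a rough sketch of the broadcast pattern suggested by the "Broadcasting BWT"/"Broadcasting SA" messages (an assumption for illustration, not the actual pBWA source): rank 0 loads an index component and broadcasts it, so every rank ends up holding its own full copy.

Code:
/* Assumption for illustration, not the actual pBWA source: rank 0 reads an
 * index component (BWT or SA) and broadcasts it, so every rank holds a
 * complete copy. p ranks on one node therefore need roughly p times the
 * memory of a single BWA process. */
#include <mpi.h>
#include <stdlib.h>

static unsigned char *bcast_index(unsigned char *buf, long *n_bytes, int rank)
{
    /* Rank 0 has already filled buf/n_bytes from disk; every other rank
     * allocates a matching buffer and receives it. MPI_Bcast counts are
     * ints, so a real implementation must chunk anything larger than
     * INT_MAX (skipped here to keep the sketch short). */
    MPI_Bcast(n_bytes, 1, MPI_LONG, 0, MPI_COMM_WORLD);
    if (rank != 0)
        buf = malloc((size_t)*n_bytes);
    MPI_Bcast(buf, (int)*n_bytes, MPI_BYTE, 0, MPI_COMM_WORLD);
    return buf;
}

int main(int argc, char **argv)
{
    int rank;
    long n = 0;
    unsigned char *buf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                  /* stand-in for reading the BWT/SA file */
        n = 1024L * 1024L;
        buf = calloc((size_t)n, 1);
    }
    buf = bcast_index(buf, &n, rank);
    /* ...alignment would use the index here; every rank now holds n bytes... */
    free(buf);
    MPI_Finalize();
    return 0;
}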

Last edited by dp05yk; 09-16-2011 at 05:18 AM.
dp05yk is offline   Reply With Quote