I'm trying to use a computer cluster for the first time (SHARCNET's orca cluster), and I'm getting errors when running Bowtie with 8 threads. Runs with 2 and 4 threads on the cluster work fine, and I can run with 8 threads locally on Windows without any problems. Below are the Bowtie errors from an 8-thread run on the cluster, after which it exits:
Locally I'm using --chunkmbs 512; Bowtie uses about 5 GB of RAM and starts outputting alignments. On the server I've tried up to --chunkmbs 16384 and requested up to 24 GB of RAM for the job, but not a single alignment is output before Bowtie exits with errors. I have 60 million 100 bp single-end reads. The other Bowtie options are the same in all cases: -n 3 -l 75 --best --strata -m 1.
[Edit] After playing around with more options, it might be working now. I left --chunkmbs lower (1024) but requested more RAM (24 GB), and it's running with no errors so far. If anyone can explain what's going on with memory usage here, though, that would be appreciated.
Code:
Time loading forward index: 00:00:50
Time loading mirror index: 00:00:31
Warning: Exhausted best-first chunk memory for read HWI-ST724:196:C0LCGACXX:6:1101:1401:1999 1:N:0:ACAGTG (patid 5); skipping read
Warning: Exhausted best-first chunk memory for read HWI-ST724:196:C0LCGACXX:6:1101:1350:1984 1:N:0:ACAGTG (patid 3); skipping read
Warning: Exhausted best-first chunk memory for read HWI-ST724:196:C0LCGACXX:6:1101:1378:1995 1:N:0:ACAGTG (patid 4); skipping read
Warning: Exhausted best-first chunk memory for read HWI-ST724:196:C0LCGACXX:6:1101:1266:1963 1:N:0:ACAGTG (patid 2); skipping read
Warning: Exhausted best-first chunk memory for read HWI-ST724:196:C0LCGACXX:6:1101:1718:1950 1:N:0:ACAGTG (patid 6); skipping read
Warning: Exhausted best-first chunk memory for read HWI-ST724:196:C0LCGACXX:6:1101:1521:1954 1:N:0:ACAGTG (patid 7); skipping read
Warning: Exhausted best-first chunk memory for read HWI-ST724:196:C0LCGACXX:6:1101:1440:1961 1:N:0:ACAGTG (patid 1); skipping read
Warning: Exhausted best-first chunk memory for read HWI-ST724:196:C0LCGACXX:6:1101:1350:1929 1:N:0:ACAGTG (patid 0); skipping read
/var/spool/torque/mom_priv/jobs/3875588.orc-admin2.orca.sharcnet.SC: line 3: 3411 Segmentation fault /work/user/bowtie/bowtie -t -p 8 -n 3 -l 75 --best --strata -m 1 --chunkmbs 8192 --suppress "1,6,7,8" RepeatMaskerClass /work/user/sequences/h3_p05_ko_combined.fa ./h3_p05_ko8c_repeatmasker.map
Code:
--- SharcNET Job Epilogue ---
job id: 3875588
exit status: 139
cpu time: 44s / 64.0h (0 %)
elapsed time: 82s / 8.0h (0 %)
virtual memory: 2.7G / 20.0G (13 %)
WARNING: Job died due to SIGSEGV - Invalid memory reference
WARNING: Job only used 0% of its requested walltime.
WARNING: Job only used 0% of its requested cpu time.
WARNING: Job only used 13% of its requested memory.
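For what it's worth, my guess is that the chunk memory multiplies by thread count: if I'm reading the Bowtie manual correctly, --chunkmbs is the amount of best-first path-descriptor memory given to *each* thread, not the whole process. A quick back-of-envelope check (the per-thread interpretation is my assumption here):

```python
# Rough chunk-memory totals, assuming --chunkmbs is allocated per thread
# (my reading of the Bowtie manual's description of --best mode memory).
threads = 8

failing_chunkmbs = 8192   # the run that segfaulted under a 20 GB request
working_chunkmbs = 1024   # the run that seems fine under a 24 GB request

failing_total_gb = threads * failing_chunkmbs / 1024
working_total_gb = threads * working_chunkmbs / 1024

print(f"8 threads x 8192 MB = {failing_total_gb:.0f} GB")  # 64 GB, far over the 20 GB request
print(f"8 threads x 1024 MB = {working_total_gb:.0f} GB")  # 8 GB, fits within the 24 GB request
```

If that's right, it would also explain why 2 and 4 threads worked on the cluster while 8 did not, and why my Windows run (fewer competing limits, --chunkmbs 512) was fine.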