Hi,
I am attempting to use SSPACE version 3 to re-scaffold some mRNA contigs using paired-end RNA-Seq data. While SSPACE was designed for re-scaffolding genomes, I see no reason why it should not work in a similar fashion for re-scaffolding de novo assembled RNA contigs.
SSPACE is discussed on SEQanswers here:
The SSPACE webpage can be reached from here:
I consistently get the following error message:
=>date: Mapping reads to contigs with Bowtie
Thread 8 terminated abnormally: Can't open bwa output -- fatal
Out of memory!
Process 'extend/format contigs' failed on date
with:
resources_used.mem=47,424,992kb
resources_used.vmem=64,093,240kb
resources_used.walltime=00:34:33
It seems to me that SSPACE is running out of physical memory while trying to open the bwa output.
The "Thread 8 terminated abnormally" is especially cryptic, as the program was set to only run with 6 threads (-T 6).
My run parameters are:
Code:
#PBS -l nodes=1:ppn=6,mem=47gb,walltime=72:00:00

SSPACE_FILE=${HOME}/src/SSPACE-STANDARD-3.0_linux-x86_64/SSPACE_Standard_v3.0.pl
LIBRARY_FILE=/filepath/library_file_2.txt
CONTIG_FILE=/filepath/A_planci_pcg_transdec_MePath2Renam_echinoHomology.fasta  # -s, contigs that we are scaffolding
MIN_LINKS=10       # -k 10
THREADS=6          # -T, threads
SKIP=0             # -S, skip processing of reads (0=no, 1=yes)
EXTEND_CONTIGS=1   # -x, extend contigs using sequence data (0=no, 1=yes, default 0)
VERBOSE=1          # -v, run the scaffolding process in verbose mode (1=yes, 0=no, default 0, optional)

$SSPACE_FILE -l $LIBRARY_FILE -s $CONTIG_FILE -k $MIN_LINKS -T $THREADS -S $SKIP -x $EXTEND_CONTIGS -v $VERBOSE
The supporting library file is attached.
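For completeness, it follows the standard SSPACE library format of one line per library: name, aligner, forward reads, reverse reads, insert size, allowed insert-size error, and read orientation. A sketch of the layout (the read paths and insert size here are placeholders, not my exact values):

Code:
# name  aligner  read1                     read2                     insert  error  orientation
Lib1    bwa      /filepath/reads_R1.fastq  /filepath/reads_R2.fastq  400     0.25   FR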
I previously tried to run SSPACE over multiple nodes, but it seemed to use the cores on only one of the nodes.
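For reference, that attempt used a resource request along these lines (the node and core counts are illustrative):

Code:
# Two nodes requested, but SSPACE's threads all stayed on the first node
#PBS -l nodes=2:ppn=6,mem=47gb,walltime=72:00:00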
Does anyone know of a way to get SSPACE to run across multiple nodes? Can it be run with MPI multi-threading? If not, does anyone have any suggestions for reducing memory requirements?