| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| Tophat: segment-based junction search failed with err =-11 | jdanderson | Bioinformatics | 15 | 10-14-2017 07:47 AM |
| Tophat Error: Error: segment-based junction search failed with err =-6 | sjnewhouse | RNA Sequencing | 8 | 03-19-2013 05:14 AM |
| yet another "Error: segment-based junction search failed with err = -9" | liux | Bioinformatics | 1 | 08-24-2010 10:48 AM |
| Error: segment-based junction search failed with err = -9 | Albert Cheng | Bioinformatics | 1 | 08-19-2010 09:24 AM |
| Tophat Segment-based junction error = -9 | UNCKidney | Bioinformatics | 4 | 04-08-2010 08:29 AM |
#1
Junior Member
Location: Canberra, Australia | Join Date: Oct 2011 | Posts: 3
Hi,

Lately I have had a problem running TopHat 1.3.1 with 100 bp paired-end Illumina HiSeq RNA reads. After cleaning (quality trimming, duplicate removal, adapter removal) I split the files (taking care not to split the last entry between its sequence and quality lines) and fed them to TopHat. Note that I have more left-kept reads because I have an extra file of leftover unpaired reads. I have also noticed in previous successful runs that even though the paired FASTQ files fed in have the same number of sequences, the left-read and right-read counts reported in the log are slightly different.

Here is the log:

Code:
[Thu Oct 27 18:33:40 2011] Beginning TopHat run (v1.3.1)
-----------------------------------------------
[Thu Oct 27 18:33:40 2011] Preparing output location ./tophat_out/
[Thu Oct 27 18:33:40 2011] Checking for Bowtie index files
[Thu Oct 27 18:33:40 2011] Checking for reference FASTA file
[Thu Oct 27 18:33:40 2011] Checking for Bowtie
Bowtie version: 0.12.7.0
[Thu Oct 27 18:33:40 2011] Checking for Samtools
Samtools Version: 0.1.12a
[Thu Oct 27 18:33:40 2011] Generating SAM header for ../PG210SC5
[Thu Oct 27 18:33:40 2011] Preparing reads
format: fastq
quality scale: phred33 (default)
Left reads: min. length=50, count=134790672
Right reads: min. length=50, count=118121205
[Thu Oct 27 20:34:22 2011] Mapping left_kept_reads against PG210SC5 with Bowtie
[Thu Oct 27 21:42:15 2011] Processing bowtie hits
[Thu Oct 27 23:08:30 2011] Mapping left_kept_reads_seg1 against PG210SC5 with Bowtie (1/4)
[Fri Oct 28 00:27:19 2011] Mapping left_kept_reads_seg2 against PG210SC5 with Bowtie (2/4)
[Fri Oct 28 01:47:04 2011] Mapping left_kept_reads_seg3 against PG210SC5 with Bowtie (3/4)
[Fri Oct 28 02:57:47 2011] Mapping left_kept_reads_seg4 against PG210SC5 with Bowtie (4/4)
[Fri Oct 28 04:25:49 2011] Mapping right_kept_reads against PG210SC5 with Bowtie
[Fri Oct 28 05:26:52 2011] Processing bowtie hits
[Fri Oct 28 06:48:08 2011] Mapping right_kept_reads_seg1 against PG210SC5 with Bowtie (1/4)
[Fri Oct 28 08:00:12 2011] Mapping right_kept_reads_seg2 against PG210SC5 with Bowtie (2/4)
[Fri Oct 28 09:11:43 2011] Mapping right_kept_reads_seg3 against PG210SC5 with Bowtie (3/4)
[Fri Oct 28 10:21:22 2011] Mapping right_kept_reads_seg4 against PG210SC5 with Bowtie (4/4)
[Fri Oct 28 11:56:21 2011] Searching for junctions via segment mapping
	[FAILED]
Error: segment-based junction search failed with err =1

The last entry in segment_juncs.log reads:

Code:
FZStream::rewind() popen(gzip -cd './tophat_out/tmp/left_kept_reads_seg1_missing.fq.z') failed

I have previously used such a mixture of paired and unpaired reads successfully (I think!) with another set of reads, although those were smaller read sets. Even with the data above, when I use only one pair out of the four split files it works fine. I would appreciate any help in resolving this problem.
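For reference, the splitting step described above (never cutting a 4-line FASTQ record in half) can be sketched with `split` using a line count that is a multiple of 4. The file names here are hypothetical, and tiny toy files stand in for the real reads so the sketch is self-contained:

```shell
cd "$(mktemp -d)"

# Toy stand-ins for the real paired FASTQ files (names hypothetical);
# three 4-line records each.
printf '@r%s\nACGT\n+\nIIII\n' 1 2 3 > reads_1.fastq
printf '@r%s\nTGCA\n+\nIIII\n' 1 2 3 > reads_2.fastq

# A FASTQ record is exactly 4 lines, so splitting on a multiple of 4
# never separates a sequence from its quality line.
# (Use something like -l 40000000 for real HiSeq-sized files.)
split -l 8 reads_1.fastq reads_1.part_
split -l 8 reads_2.fastq reads_2.part_

wc -l reads_1.part_*   # 8 and 4 lines: only whole records per chunk
```

Splitting both mates with the same line count keeps the chunks paired, as long as the two input files list mates in the same order.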
#2
Junior Member
Location: CT, USA | Join Date: Dec 2011 | Posts: 1
I have the same problem. Were you able to figure out the reason for this error?
-canbruce
#3
Junior Member
Location: Canberra, Australia | Join Date: Oct 2011 | Posts: 3
Not yet. I suspect TopHat is running out of memory. Although I am running it on a Linux (Ubuntu) machine with 48 GB of RAM, I think that is still not enough to handle such large inputs.
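Not from the thread, but a quick way to check whether memory really is the culprit before a rerun: a minimal sketch, assuming a Linux box, that reports total RAM and any per-process virtual-memory cap the shell would impose on TopHat.

```shell
# Total physical RAM (Linux /proc interface; skipped elsewhere).
[ -r /proc/meminfo ] && grep MemTotal /proc/meminfo || true

# Any virtual-memory cap the shell imposes on child processes;
# "unlimited" means no cap is set.
ulimit -v
```

After an aborted run, the kernel log (`dmesg`) often contains an out-of-memory message if the OOM killer terminated the process.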
#4
Member
Location: Rockville | Join Date: May 2009 | Posts: 40
I had the same problem today; I hope someone can step up and point the way to a fix.
My data came directly from the Illumina pipeline as two FASTQ files.
#6
Member
Location: Rockville | Join Date: May 2009 | Posts: 40
My data is around 200M reads from one HiSeq lane, and I used 16 GB of memory to run TopHat 1.3.3 with the coverage, microexon, and butterfly search options.
By the way, it worked well with an older version of TopHat.
#7
Senior Member
Location: Canada | Join Date: Nov 2010 | Posts: 124
The butterfly search option uses a lot of memory. I'm pretty sure you'll need much more than 16 GB to align 200M reads with that option. I have 16 GB and ran out of memory trying to align ~30M 100 bp PE reads with the butterfly option.
#8
Member
Location: Rockville | Join Date: May 2009 | Posts: 40
Yes, that is true. Without those options, it works well now.
#9
Senior Member
Location: MDC, Berlin, Germany | Join Date: Oct 2009 | Posts: 317
TopHat has been updated to version 1.4.0 (BETA). Has anyone already tried this new version? A big change is that TopHat can now first map reads against a user-supplied transcriptome, which I expect to be much more stable.
__________________
Xi Wang
#10
Member
Location: ma | Join Date: Apr 2012 | Posts: 10
I get a similar error, although with a different error code (see title).
After looking at the code, I think the error has to do with threading on multiple cores and read IDs. In the section of the code I looked at, read_ids are handled differently for threaded and non-threaded code (I think). I am running the latest version (2.0.3) and am trying again without threading.
#11
Member
Location: california | Join Date: Jul 2009 | Posts: 24
Kesner, have you solved the problem by not using threading? I have the same problem in segment_juncs:

Code:
Processed 4000000 root segment groups
Error: ReadStream::getRead() called with out-of-order id#!

I'm using TopHat 1.4.1 (I get the same error with 2.0.3, but there it comes from tophat_reports). It should not be a memory problem, because I have 96 GB of RAM, so it may be something related to threading.
#12
Member
Location: ma | Join Date: Apr 2012 | Posts: 10
I think I got past the problem by using single threading. Since there are many processes on the machine I am using, it is possible some other resource failure was to blame.
Now my problem is that the run is taking forever to complete. The alignments are finished, but the junction-processing step handles only about one chromosome a day. Then again, I'm not sure throwing multiple cores at this step helps. I know my reads are contaminated with a lot of background; I figure that is why I am having problems with the whole process in general.
#13
Member
Location: california | Join Date: Jul 2009 | Posts: 24
I agree that there is probably something wrong with resource allocation. I re-ran some samples (also multi-threaded); sometimes I got the same error message, and sometimes the run finished successfully. So the problem is not reproducible, and may depend on the state of the machine at run time.
#14
Member
Location: ma | Join Date: Apr 2012 | Posts: 10
I was wondering if you still see the problem with the latest code build of TopHat 2?
#15
Member
Location: St. Louis, MO | Join Date: Aug 2011 | Posts: 53
I am getting the same error with TopHat 2.0.0.

tophat.log:

Code:
....
[2012-06-30 11:50:15] Mapping right_kept_reads.m2g_um_seg4 against mm9.fa with Bowtie2 (4/4)
/usr/local/bin/tophat-2.0.0/fix_map_ordering: /lib64/libz.so.1: no version information available (required by /usr/local/bin/tophat-2.0.0/fix_map_ordering)
[2012-07-01 00:20:11] Searching for junctions via segment mapping
	[FAILED]
Error: segment-based junction search failed with err =1
Error: ReadStream::getRead() called with out-of-order id#!

Code:
...
Loading chrUn_random...done
Loading chrX_random...done
Loading chrY_random...done
Loading ...done
>> Performing segment-search:
Loading left segment hits...
Error: ReadStream::getRead() called with out-of-order id#!
#16
Member
Location: St. Louis, MO | Join Date: Aug 2011 | Posts: 53
I found the segment_juncs command that died in runs.log.
I tried rerunning the exact command but with p=1 (single-threaded), and got the same error. I've dug through the source of reads.cpp and found that read access apparently must be sequential. Does anyone know in which file these reads are listed?
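Not an official TopHat tool, but as an illustration of the sequential-access requirement mentioned above: a small awk sketch that checks whether numeric read IDs in a FASTQ-style file are non-decreasing. It assumes headers like `@12345`, which may not match TopHat's tmp files exactly; a toy file with one out-of-order record stands in for the real data.

```shell
cd "$(mktemp -d)"

# Toy intermediate file whose third record is out of order
# (id 2 after id 3); headers assumed to look like "@<number>".
printf '@%s\nACGT\n+\nIIII\n' 1 3 2 > left_kept_reads_seg1.fq

# Every 4th line starting at line 1 is a header: strip the '@',
# compare each id with the previous one, report the first violation.
awk 'NR % 4 == 1 {
       id = substr($1, 2) + 0
       if (id < prev) { print "out of order at line " NR; exit }
       prev = id
     }' left_kept_reads_seg1.fq
```

On this toy input it prints `out of order at line 9`; on a correctly ordered file it prints nothing.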
#17
Member
Location: St. Louis, MO | Join Date: Aug 2011 | Posts: 53
I'm still digging through the source hoping for some light. Any insight would be greatly appreciated.
#18
Member
Location: St. Louis, MO | Join Date: Aug 2011 | Posts: 53
I ran a test over the weekend using data that had failed before, and this time the run was successful!
I ran with exactly as many threads as the box has cores. This contrasts with the previous failed runs, where I asked for more threads than existed (2x). I'm optimistic that I'll be able to rerun the other samples successfully as well; I'll let you know if that isn't the case.
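A trivial sketch of the same precaution: query the machine's core count and never ask for more threads than that. The `tophat` invocation here is illustrative only (index and file names are hypothetical), so it is echoed rather than executed.

```shell
# Logical cores visible to this process (GNU coreutils).
THREADS=$(nproc)

# Cap -p at the core count instead of oversubscribing (the 2x case
# that failed above); the command is only printed, not run.
echo "tophat -p ${THREADS} -o tophat_out genome_index reads_1.fastq reads_2.fastq"
```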
#19
Member
Location: CH | Join Date: Jan 2011 | Posts: 10
It seems to be a really random issue... I have it in only one sample; three others have run happily. I'm curious about the explanation.
#20
Member
Location: St. Louis, MO | Join Date: Aug 2011 | Posts: 53
For me it was pretty systematic.
My solution seems to work so far, as I have rerun three samples that previously failed. I think the problem may stem from Cufflinks' threads competing at the scheduler with other processes; many times people report this error when they share compute resources on a cluster. Perhaps the ordering problem is basically a race condition where threads become out of sync, so results are returned in an unexpected order.

Last edited by ians; 07-13-2012 at 07:49 AM.
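The out-of-sync ordering suspected above can be illustrated with a toy shell example: several background jobs writing concurrently may complete in any order, so output a reader expects to be sequential can arrive shuffled. This is only an analogy for the suspected race, not TopHat's actual code.

```shell
cd "$(mktemp -d)"

# Three background writers append concurrently; the scheduler decides
# who finishes first, so line order in out.txt varies run to run.
for i in 1 2 3; do
  ( printf 'chunk %s\n' "$i" >> out.txt ) &
done
wait

cat out.txt    # order is nondeterministic
sort out.txt   # an explicit sort restores a deterministic order
```

The fix in a real pipeline is the same as the last line: either serialize the writers (single-threaded mode) or sort/merge the results before a consumer that requires sequential IDs reads them.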