Hi!
I have been using GAPipeline and ELAND for about a year with no major problems - mainly for QC of our runs.
Recently we started to see a weird pattern in the Summary.htm generated by GERALD - in each lane, starting at a certain tile, there were 0 reads aligned to the genome in that tile and in every tile after it.
I looked through the pipeline output and didn't see anything unusual from ELAND - except an occasional error like this:
Code:
eland_28: /u01/illumina/GAPipeline-1.5.1/c++/programs/ELAND_outer.cpp:999: virtual void MatchTableMulti::print(OligoSource&, MatchPositionTranslator&, const std::vector<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&, const std::vector<unsigned int, std::allocator<unsigned int> >&, const SuffixScoreTable&, int): Assertion `feof(pMatchType_)==0' failed.
/bin/sh: line 3: 24462 Aborted  /u01/illumina/GAPipeline-1.5.1/bin/eland_28 s_1_1_eland_query.txt /u01/db/genomes/Hs/hg18 s_1_eland_multi.txt
But it doesn't happen consistently or in every lane - whereas the zero-aligned-tile pattern above shows up consistently, in every lane.
We are running version 1.5.1 of GAPipeline, compiled on x86_64 Linux (CentOS 5.4) with 8 CPU cores and 32 GB of RAM. It worked just fine before the v4 kits, and we were already running 8 threads for both the pipeline (make -j 8) and ELAND (ELAND_MULTIPLE_INSTANCES 8).
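For reference, the parallelism is set in two places - the GERALD config and the make invocation. A paraphrased excerpt of our setup (the genome path is the one from the error above; the ANALYSIS line is illustrative, adjust to your analysis type):
Code:
# GERALD config.txt (relevant lines only)
ANALYSIS eland_extended
ELAND_GENOME /u01/db/genomes/Hs/hg18
ELAND_MULTIPLE_INSTANCES 8

# pipeline driven with
make -j 8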
The first thing that I tried was to run it with just one thread - but it made no difference.
Then I looked at the contents of the lanes and noticed that the tile where the problem starts depends on how many clusters (reads) there are in that lane.
For example, a lane where we had 34 million reads had the problem starting at tile 119, while a lane where we had about 40 million reads had it starting at tile 102 - which made me think that the read number is stored somewhere in the code in fewer than 36 bits (a weird width, I know).
Looking back, I noticed that we never had more than 20 million reads in one lane before the version 4 kits - which is probably why we never hit this problem before on the same hardware and the same pipeline version.
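To put some numbers on this: assuming roughly even cluster counts per tile and 120 tiles per lane (what our v4 runs have - adjust for your setup), both lanes break at almost the same cumulative read count. A quick back-of-the-envelope sketch:
Code:
# Cumulative reads processed before the first zero-aligned tile,
# assuming 120 tiles/lane and an even spread of clusters per tile.
TILES_PER_LANE = 120
for lane_reads, first_bad_tile in [(34000000, 119), (40000000, 102)]:
    reads_before = lane_reads * (first_bad_tile - 1) // TILES_PER_LANE
    print("%d reads/lane -> ~%d reads before tile %d"
          % (lane_reads, reads_before, first_bad_tile))
Both lanes cross roughly 33.4-33.7 million reads right where the zeros start. For what it's worth, 2^25 = 33,554,432 falls exactly in that window, and 64 bytes/record x 2^25 records = 2^31 bytes, so a signed 32-bit byte offset into a temporary file would also overflow right around there - pure speculation on my part, but it does smell like fixed-width storage running out.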
To verify this hypothesis, we ran GERALD on half of the tiles at a time (the first half in one run, the second half in another) and both halves came out fine. The catch is that combining the output of the two GERALD runs is not trivial - we would probably have to merge the two directories and rerun the last stages of the pipeline.
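In case anyone needs the half-and-half workaround in the meantime, the merge we have in mind is basically concatenating the per-lane ELAND outputs of the two GERALD directories before rerunning the final stages. A rough sketch (directory names are hypothetical; s_<lane>_eland_multi.txt follows the naming from the error above, and any other per-lane files would need the same treatment):
Code:
import glob, os, shutil

# Concatenate each per-lane ELAND output from the two half-tile GERALD
# runs so that the final pipeline stages see one file per lane.
run_a = "GERALD_first_half"    # hypothetical output dirs of the two runs
run_b = "GERALD_second_half"
merged = "GERALD_merged"
if not os.path.isdir(merged):
    os.makedirs(merged)
for path_a in sorted(glob.glob(os.path.join(run_a, "s_*_eland_multi.txt"))):
    name = os.path.basename(path_a)
    with open(os.path.join(merged, name), "wb") as out:
        for src in (path_a, os.path.join(run_b, name)):
            with open(src, "rb") as fh:
                shutil.copyfileobj(fh, out)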
It seems that the code needs some review for version 4 kits.
Tony, help please?
Thanks,
Razvan