SEQanswers

SEQanswers (http://seqanswers.com/forums/index.php)
-   Bioinformatics (http://seqanswers.com/forums/forumdisplay.php?f=18)
-   -   Bowtie2-build error writing to reference index file (http://seqanswers.com/forums/showthread.php?t=62056)

cz0013 08-17-2015 07:07 AM

Bowtie2-build error writing to reference index file
 
I have been trying to make an index file for bowtie2.

What I type is:
bowtie2-build sequence.fasta sequence

Then I get this error:
Settings:
Output files: "sequence.*.bt2"
Line rate: 6 (line is 64 bytes)
Lines per side: 1 (side is 64 bytes)
Offset rate: 4 (one in 16)
FTable chars: 10
Strings: unpacked
Max bucket size: default
Max bucket size, sqrt multiplier: default
Max bucket size, len divisor: 4
Difference-cover sample period: 1024
Endianness: little
Actual local endianness: little
Sanity checking: disabled
Assertions: disabled
Random seed: 0
Sizeofs: void*:8, int:4, long:8, size_t:8
Input files DNA, FASTA:
sequence.fasta
Building a SMALL index
Reading reference sizes
Error writing to the reference index file (.4.ebwt)
Time reading reference sizes: 00:00:01
Total time for call to driver() for forward index: 00:00:01
Error: Encountered internal Bowtie 2 exception (#1)
Command: bowtie2-build --wrapper basic-0 sequence.fasta sequence
Deleting "sequence.3.bt2" file written during aborted indexing attempt.
Deleting "sequence.4.bt2" file written during aborted indexing attempt.

If anyone knows what is causing this error, help would be greatly appreciated.

GenoMax 08-17-2015 07:13 AM

That is a bit odd, since you seem to have write permissions in the directory where you are running this command (is that correct)? What kind of reference (length, multi-fasta?) is in sequence.fasta?

Are you using the latest bowtie2 package?
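A quick way to sanity-check both the write permission and the free space before re-running the build would be something like this (the probe filename is arbitrary):

```shell
# Probe write permission in the current directory (temp filename is arbitrary)
if touch .bt2_write_test 2>/dev/null; then
    rm -f .bt2_write_test
    echo "directory is writable"
else
    echo "directory is NOT writable"
fi

# Check free space on the filesystem holding the current directory
df -Ph .
```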

cz0013 08-17-2015 07:26 AM

I do have write permissions, and I should be using the latest bowtie2 package. sequence.fasta is the SL2.5 release of the tomato genome.

GenoMax 08-17-2015 07:59 AM

Can you try using a different "base name" for the index files (e.g. tomato instead of sequence)? One additional thing to try would be to force bowtie2 to create a "large" index by adding --large-index to the command line.

cz0013 08-17-2015 08:47 AM

I am now running it like this:
bowtie2 --large-index SL2.50_genome.fasta SL2.50_genome

But now I get this error message:
bowtie2 --large-index SL2.50_genome.fasta SL2.50_genome
(ERR): Cannot find the large index 0.1.bt2l
Exiting now ...

GenoMax 08-17-2015 08:53 AM

You need to do "bowtie2-build" first before you can use the index for alignments with bowtie2. Did the indexing part work?
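In other words, the flow is build first, then align against the index basename. A sketch (the read and output filenames here are hypothetical, and the block is guarded so it is a no-op on a machine without bowtie2 or the genome file):

```shell
# Run only if bowtie2 and the genome FASTA are actually present
if command -v bowtie2-build >/dev/null 2>&1 && [ -f SL2.50_genome.fasta ]; then
    # Step 1: build the index once (with --large-index this writes SL2.50_genome.*.bt2l)
    bowtie2-build --large-index SL2.50_genome.fasta SL2.50_genome
    # Step 2: align against the index *basename* (-x), not the FASTA file
    bowtie2 -x SL2.50_genome -U reads.fq -S aligned.sam
else
    echo "bowtie2 not available here; commands shown for illustration"
fi
```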

Are you now ready to do the alignment (looks like it)?

cz0013 08-17-2015 08:58 AM

That is my bad. I should have been running bowtie2-build; I ran bowtie2 without realizing it was the wrong command. I ran it again, this time actually using bowtie2-build, and ran into the same problem I was having before. So I haven't gotten the indexing part to work.
Settings:
Output files: "SL2.50_genome.*.bt2l"
Line rate: 7 (line is 128 bytes)
Lines per side: 1 (side is 128 bytes)
Offset rate: 4 (one in 16)
FTable chars: 10
Strings: unpacked
Max bucket size: default
Max bucket size, sqrt multiplier: default
Max bucket size, len divisor: 4
Difference-cover sample period: 1024
Endianness: little
Actual local endianness: little
Sanity checking: disabled
Assertions: disabled
Random seed: 0
Sizeofs: void*:8, int:4, long:8, size_t:8
Input files DNA, FASTA:
SL2.50_genome.fasta
Building a LARGE index
Reading reference sizes
Error writing to the reference index file (.4.ebwt)
Time reading reference sizes: 00:00:01
Total time for call to driver() for forward index: 00:00:01
Error: Encountered internal Bowtie 2 exception (#1)
Command: bowtie2-build --wrapper basic-0 SL2.50_genome.fasta SL2.50_genome
Deleting "SL2.50_genome.3.bt2l" file written during aborted indexing attempt.
Deleting "SL2.50_genome.4.bt2l" file written during aborted indexing attempt.

westerman 08-17-2015 09:28 AM

Is it possible that you are running out of disk space? A quick and dirty way of checking is to start up the indexing process and in another window do something like:

for i in `seq 1 20`; do df -h; sleep 60; done

That way you will get 20 snapshots of disk-space usage at one-minute intervals. You should see the disk usage go up on one of your disks but, we hope, not to the limit.

cz0013 08-17-2015 09:31 AM

I think that was the problem. The server that I was running it on is out of space. I switched servers and now it is working. Thanks for the help guys, much appreciated.
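For anyone who lands on this thread later: a rough pre-flight check, comparing free space against the genome size before starting the build, can catch this up front. A sketch (the filename is hypothetical, and the 4x multiplier is only a conservative guess at how much scratch space indexing might need, not an official bowtie2 figure):

```shell
# Rough pre-flight disk-space check before bowtie2-build (hypothetical filename)
fasta="sequence.fasta"
if [ -f "$fasta" ]; then
    # Guess: allow ~4x the FASTA size for the index files (unofficial rule of thumb)
    need_kb=$(( $(du -k "$fasta" | awk '{print $1}') * 4 ))
    avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -lt "$need_kb" ]; then
        echo "warning: only ${avail_kb} KB free; indexing may need ~${need_kb} KB"
    else
        echo "looks ok: ${avail_kb} KB free"
    fi
else
    echo "no $fasta here; nothing to check"
fi
```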
