SEQanswers > Bioinformatics > Bioinformatics

12-03-2015, 01:54 AM   #1
voidnyx
Junior Member
Location: Germany
Join Date: Jun 2015
Posts: 7

Running makeblastdb with large input files


I am currently trying to make a BLAST-searchable database from a set of DNA reads. My reads file is ~37 GB, and I was wondering if there is a way to speed up the makeblastdb step.

Is it possible to split my .fa file and run multiple instances of makeblastdb in parallel without messing up my final db file? Is there a way to make the program use more than one core?
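Splitting the FASTA file is straightforward, since every record starts with a `>` header line. A minimal sketch of the splitting step (file names and chunk size are illustrative, not anything the poster specified):

```python
# Split a FASTA file into chunks of N records so that each chunk can be
# indexed independently. Assumes the file starts with a ">" header line.
def split_fasta(path, records_per_chunk=1_000_000, prefix="chunk"):
    out = None        # currently open chunk file
    count = 0         # records seen so far
    chunk = 0         # chunks written so far
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                # Start a new chunk every records_per_chunk headers.
                if count % records_per_chunk == 0:
                    if out:
                        out.close()
                    out = open(f"{prefix}_{chunk:02d}.fa", "w")
                    chunk += 1
                count += 1
            out.write(line)
    if out:
        out.close()
    return chunk  # number of chunk files written
```

Each chunk can then be indexed in its own process, e.g. `makeblastdb -in chunk_00.fa -dbtype nucl -out chunk_00`, with the parallelism coming from running several such processes at once.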

thanks for any ideas.
voidnyx
12-03-2015, 02:28 PM   #2
maubp
Peter (Biopython etc)
Location: Dundee, Scotland, UK
Join Date: Jul 2009
Posts: 1,541

You could make several small databases, then combine them with an alias file, as the NCBI does for the NR and NT databases.
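For the combining step, BLAST+ ships `blastdb_aliastool`, which writes an alias file pointing at a list of member databases (e.g. `blastdb_aliastool -dblist "chunk_00 chunk_01" -dbtype nucl -out reads_all -title all_reads`). For a simple case the nucleotide alias file (`.nal`) can also be written by hand; a sketch, with hypothetical chunk-database names:

```python
# Write a minimal .nal alias file that ties per-chunk databases together.
# The member databases (chunk_00, chunk_01, ...) are assumed to have been
# built already with makeblastdb; all names here are illustrative.
def write_alias(out_base, title, db_names):
    """Create <out_base>.nal listing the member databases."""
    with open(out_base + ".nal", "w") as fh:
        fh.write(f"TITLE {title}\n")
        fh.write("DBLIST " + " ".join(db_names) + "\n")

write_alias("reads_all", "all_reads", ["chunk_00", "chunk_01"])
```

A search can then target the alias as if it were a single database, e.g. `blastn -db reads_all -query query.fa`.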
maubp

Tags: blast, large file, makeblastdb

