Hello,
I'm trying to assemble a bacterial genome (~4 Mb) and am facing an unusual problem.
While the Kmerfreq script gives me a reasonable estimate of the genome size (but only when I do the calculation manually), when I run the same file through Velvet or SOAPdenovo the genome size comes out at around 22 Mb.
The steps I use are as follows:
My data is paired-end, so I have 2 starting files, 'fileA' and 'fileB'.
1. DynamicTrim (downloaded from the Solexa website) of fileA; the result file is fileA.trimmed
2. DynamicTrim of fileB; the result file is fileB.trimmed
3. LengthSort.pl (also downloaded from the Solexa website) on fileA.trimmed and fileB.trimmed. This step results in fileA.trimmed.discard, fileA.trimmed.single, fileA.trimmed.paired1 and fileA.trimmed.paired2
[All the resultant files are labelled fileA.trimmed.*]
4. Calculation of the coverage of your genome. My genomes are at 307X and 169X coverage (see the coverage sketch after this list).
5. Use of a script that removes a fraction of your raw reads, so a fraction of 5 means 1/5th of your reads will be left after this step. Use this script on both the *.paired1 and *.paired2 files (a subsampling sketch follows the list).
6. Combine the resultant files of step 5
7. Convert the combined file to FASTA and run seqstat on it. This gives you the average read length, the largest read length, etc.
8. Run Kmerfreq on this combined file. Here the genome size estimate comes out huge, around 22 Mb. However, when I do the calculation manually, it comes out about right. [To do the manual calculation: download the Kmerfreq result file, open it in an Excel spreadsheet and do the following calculation; a worked version of it is sketched after this list.
D = depth of coverage
M = peak k-mer depth
L = average read length
K = k-mer length (always 17)
D = (M * L) / (L - K + 1)
G = genome size
N = total number of reads (from the seqstat results)
B = sum of the low-frequency k-mers
G = (N * (L - K + 1) - B) / D ]
9. Finally, run Velvet on the combined file, or SOAPdenovo on the result files of step 5.
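For step 4, this is roughly how I compute coverage (number of reads x average read length / genome size), written as a small Python sketch. All the numbers are placeholders rather than my real data, and the ~4 Mb expected genome size is an assumption:

```python
# Rough fold-coverage estimate: (number of reads * average read length) / genome size.
# All numbers below are placeholders, not my real data.

EXPECTED_GENOME_SIZE = 4_000_000  # ~4 Mb bacterial genome (assumed)

def coverage(n_reads: int, avg_read_len: float,
             genome_size: int = EXPECTED_GENOME_SIZE) -> float:
    """Return the estimated fold coverage of the genome."""
    return n_reads * avg_read_len / genome_size

if __name__ == "__main__":
    # e.g. 12,280,000 reads of ~100 bp against a 4 Mb genome -> ~307X
    print(f"{coverage(12_280_000, 100):.0f}X")
```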
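For step 5, this is approximately what the read-reduction script does as I understand it: keep every Nth read pair, so a fraction of 5 leaves 1/5th of the pairs. This is only a sketch with hypothetical file names, assuming standard 4-line FASTQ records and that *.paired1 and *.paired2 list the mates in the same order; the actual script I use may differ:

```python
import itertools

def fastq_records(path):
    """Yield 4-line FASTQ records (header, sequence, plus line, qualities)."""
    with open(path) as fh:
        while True:
            record = list(itertools.islice(fh, 4))
            if len(record) < 4:
                return
            yield record

def subsample_pairs(in1, in2, out1, out2, fraction=5):
    """Keep every `fraction`-th read pair, so 1/fraction of the pairs remain.

    Assumes in1 and in2 are in the same order, as *.paired1/*.paired2 are.
    """
    with open(out1, "w") as o1, open(out2, "w") as o2:
        for i, (r1, r2) in enumerate(zip(fastq_records(in1), fastq_records(in2))):
            if i % fraction == 0:
                o1.writelines(r1)
                o2.writelines(r2)

# Hypothetical file names:
# subsample_pairs("fileA.trimmed.paired1", "fileA.trimmed.paired2",
#                 "fileA.sub.paired1", "fileA.sub.paired2", fraction=5)
```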
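And this is the manual calculation from step 8 written out in Python rather than Excel, just to make the arithmetic explicit. The input numbers are placeholders standing in for my seqstat output and the Kmerfreq peak:

```python
def genome_size(n_reads, avg_read_len, peak_kmer_depth, low_freq_kmer_sum, k=17):
    """K-mer based genome size estimate, as in step 8.

    D = (M * L) / (L - K + 1)       depth of coverage
    G = (N * (L - K + 1) - B) / D   genome size
    """
    kmers_per_read = avg_read_len - k + 1                            # L - K + 1
    depth = peak_kmer_depth * avg_read_len / kmers_per_read          # D
    return (n_reads * kmers_per_read - low_freq_kmer_sum) / depth    # G

# Placeholder numbers, not my real data:
# 2.5M reads of 100 bp, k-mer peak depth of 50, 3M low-frequency k-mers
print(f"{genome_size(2_500_000, 100, 50, 3_000_000) / 1e6:.1f} Mb")  # ~3.5 Mb
```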
So Velvet estimates the total size at 22 Mb, which is incorrect. A colleague suggested this was because Velvet can't handle data at more than 80X coverage, but I'm now removing data to bring the expected coverage down to 20X, and even then I still get a huge genome size.
Could this be due to contamination? If so, what is the best way to filter out the contaminants? Or am I going wrong somewhere in the steps I'm using?
I would really appreciate any help or advice on this.
Thanx