Well... if there is a reference, I would compare the assemblies to the reference. With or without a reference, I think QUAST is fairly useful. It tells you the number of predicted genes and so forth in your assembly; generally, the more long genes and the fewer short genes you have, the higher the assembly quality. It's most useful when you have more than one assembly to compare. If you have a reference, it will also tell you the number of misassemblies.
Similarly, if you map all of the source reads to the assembly, then the number of non-match symbols (insertions, deletions, substitutions) correlates with assembly quality, as does % of reads mapped, % paired, and % unambiguous mappings. We use BBMap for that purpose here, though again, it's better for determining the relative quality of two assemblies than the absolute quality of one assembly. You can also try ALE, which attempts to judge the probability of an assembly being correct given the reads. This is based on mismatches, deviation of paired insert size from normal, and coverage depth.
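To make the mapping-based metrics concrete, here is a minimal sketch (not BBMap itself) of how % mapped and % properly paired can be computed from a SAM file by inspecting the FLAG field. The function name is illustrative; the bit values follow the SAM specification.

```python
def mapping_stats(sam_lines):
    """Return (percent_mapped, percent_properly_paired) for SAM records."""
    total = mapped = paired = 0
    for line in sam_lines:
        if line.startswith("@"):              # skip header lines
            continue
        flag = int(line.split("\t")[1])
        if flag & 0x100 or flag & 0x800:      # ignore secondary/supplementary
            continue
        total += 1
        if not flag & 0x4:                    # 0x4 = segment unmapped
            mapped += 1
        if flag & 0x2:                        # 0x2 = properly paired
            paired += 1
    return 100.0 * mapped / total, 100.0 * paired / total
```

In practice you would stream the SAM output of your mapper through something like this (or just read the mapper's own summary statistics); the point is that mapping rate and proper-pair rate drop when the assembly disagrees with the reads.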
If you have EST data, then calculating the EST capture rate is also very useful in determining genome completeness and misassemblies.
Incidentally, I think Q30 is much too high for trimming. Also, what trimming tool are you using? In my tests, the top-performing ones use the phred algorithm (this includes seqtk and my own); all other algorithms were consistently inferior. The attached graph shows the error rate of reads versus bases remaining from the initial 150 Mbp dataset after quality-trimming; each point represents a different quality cutoff (so the leftmost point is trimming to Q40, then Q39, etc.). The black line is the best, but that uses error-correction AND quality-trimming; all the others use quality-trimming only. seqtk and bbtrim appear to be identical. The error rates were calculated by mapping back to the Pedobacter reference.
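For reference, the "phred" trimming approach mentioned above is usually described as Mott's algorithm (it is what seqtk's trimfq implements): convert each quality score Q to an error probability p = 10^(-Q/10), score each base as (cutoff − p), and keep the maximum-scoring contiguous run of bases, i.e. a max-subarray scan. This is a sketch under that description; the function name and default cutoff are illustrative, not any tool's actual API.

```python
def mott_trim(quals, cutoff_q=20):
    """Return (start, end) of the retained region, end exclusive.

    quals: list of phred quality scores, one per base.
    cutoff_q: quality cutoff; bases scoring worse than this drag the
    running score down and are trimmed off the ends.
    """
    cutoff = 10.0 ** (-cutoff_q / 10.0)        # error-prob threshold
    best = (0.0, 0, 0)                         # (score, start, end)
    running, start = 0.0, 0
    for i, q in enumerate(quals):
        running += cutoff - 10.0 ** (-q / 10.0)
        if running <= 0.0:                     # reset, as in max-subarray
            running, start = 0.0, i + 1
        elif running > best[0]:
            best = (running, start, i + 1)
    return best[1], best[2]
```

Because the score is a sum over the whole retained window rather than a per-base threshold, a single bad base in the middle of a good read does not split it, which is why this family of trimmers retains more data at a given error rate than fixed-threshold trimming.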