SEQanswers


bloosnail 12-10-2016 12:05 PM

Minimum amount of data needed for reliable results?
 
We are analyzing whole-genome metagenomic data taken from the surface of the eye. Each sample has millions of reads, but at most 1-2% of them are bacterial. We are wondering whether there are resources relating the amount of data available to the reliability of the results, e.g. resolving taxonomy down to the species level for bacteria present at greater than 1% relative abundance. Currently we are aligning the reads to whole bacterial genomes, but there are many multi-mapping locations, many of which may be false positives. We have also tried MetaPhlAn2, which aligns against a custom catalog of unique markers for different clades, but usually only several hundred reads map back -- many of the samples report very few or no species present. Specifically, we are looking for methods to analyze whole-genome metagenomic data when the amount of data is very low. Any help is greatly appreciated.
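For reference, our MetaPhlAn2 run looks roughly like this (the file names and thread count are placeholders):

    # Profile a single sample against the MetaPhlAn2 marker catalog
    metaphlan2.py sample.fastq --input_type fastq --bowtie2out sample.bt2.bz2 --nproc 8 > sample_profile.txt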

Daniel

Brian Bushnell 12-10-2016 12:50 PM

You might try removing human sequence, then assembling the rest and BLASTing the contigs against nt/nr/RefSeq microbial. Assuming the contigs are longer than read length, they will give you more reliable hits. What kind of depth do you have for the bacteria? You can find that out with a kmer-frequency histogram, after human reads are removed.
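As a rough sketch with BBTools (the file names and human reference path are placeholders, and the parameters are just a starting point):

    # Map against the human genome and keep only the unmapped (non-human) reads
    bbmap.sh ref=human.fa in=r1.fq in2=r2.fq outu=clean_r1.fq outu2=clean_r2.fq minid=0.95

    # Kmer-frequency histogram of the remaining reads, to gauge bacterial depth
    kmercountexact.sh in=clean_r1.fq in2=clean_r2.fq khist=khist.txt peaks=peaks.txt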

bloosnail 12-10-2016 09:59 PM

Thank you for the quick response. The idea of assembling the reads into contigs before alignment makes sense; I will let my supervisor know. Do you know of good software for this? I have tried Velvet in the past but did not use it extensively.

I forgot to mention that we have removed human sequences, although the revised reference genome that you created seems like it would be especially useful for us.

Could you give more information on how to estimate the depth of the bacteria? There are generally fewer than 100,000 bacterial reads per sample out of 20-30 million initial reads (before any trimming/contaminant removal).

Brian Bushnell 12-11-2016 08:42 AM

I suggest SPAdes or MEGAHIT for metagenome assembly. 100k is not many reads; you might not have sufficient depth for assembly. In that case, you may get a better assembly by combining the bacterial reads from all samples and assembling them together. Then you can quantify by mapping each sample back to the combined assembly.
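Something like this (directory and file names are placeholders):

    # Pool the human-depleted bacterial reads from all samples
    cat sample*/clean_r1.fq > all_r1.fq
    cat sample*/clean_r2.fq > all_r2.fq

    # Co-assemble with MEGAHIT (or SPAdes: spades.py --meta -1 ... -2 ... -o asm)
    megahit -1 all_r1.fq -2 all_r2.fq -o combined_asm

    # Quantify one sample by mapping it back to the combined contigs
    bbmap.sh ref=combined_asm/final.contigs.fa in=sample1/clean_r1.fq in2=sample1/clean_r2.fq covstats=sample1_covstats.txt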

For human removal, the raw human genome is fine in your case (bacteria). The masked version is mainly for decontaminating eukaryotic data, since eukaryotes share sequence with human; bacteria basically don't.

gringer 12-11-2016 09:27 AM

You can create rarefaction curves to see if what you have is likely sufficient to describe the metagenomic profile.

The basic process is to subsample the reads and see whether the estimate of species diversity stays similar. A low-complexity sample will plateau at low coverage, while the diversity of a high-complexity sample will keep increasing as more reads are added.
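A quick way to approximate this with seqtk and MetaPhlAn2 (the fractions, seed, and file names are placeholders):

    # Subsample at increasing fractions and count how many species are detected
    for frac in 0.05 0.1 0.25 0.5 1.0; do
        seqtk sample -s42 bacterial_reads.fq $frac > sub_$frac.fq
        metaphlan2.py sub_$frac.fq --input_type fastq --nproc 4 > profile_$frac.txt
        # Count species-level lines, excluding strain-level (t__) entries
        n=$(grep "s__" profile_$frac.txt | grep -v "t__" | wc -l)
        echo "$frac $n"
    done

If the species count levels off well before the full data set, your profile is probably about as complete as it is going to get.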

dhtaft 12-12-2016 03:30 PM

I had some luck using IMSA in a similar situation to the one you describe, but only after human read removal.

