SEQanswers

SEQanswers (http://seqanswers.com/forums/index.php)
-   De novo discovery (http://seqanswers.com/forums/forumdisplay.php?f=27)
-   -   De Novo Transcriptome Assembly from Kmers only (http://seqanswers.com/forums/showthread.php?t=39789)

joeseki 01-09-2014 10:15 AM

De Novo Transcriptome Assembly from Kmers only
 
Background:

I have RNA-Seq data from 8 time points comparing study to control, with each sample sequenced on its own lane. I do not trust the reference sequence. What I'm interested in is the most significantly changing transcripts.

Hypothesis:

The kmers that are associated with each transcript should change in a coherent manner as the transcript expression changes. Comparing unique kmers first and extracting the most significantly changing kmers should enrich the transcriptome for the genes that are changing the most.
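As a concrete sketch of this hypothesis, here is how one might count kmers per condition and pull out the most strongly changing ones. This is a toy illustration, not the poster's pipeline: the function names, the pseudocount, and the fold-change cutoff are my own choices, and a real workflow would use a proper statistical test across the 8 time points and a disk-backed kmer counter (e.g. Jellyfish or khmer) rather than an in-memory Counter.

```python
from collections import Counter

def count_kmers(reads, k=21):
    """Count every k-mer across a collection of reads (toy in-memory counter)."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def changing_kmers(study_counts, control_counts, min_fold=4, min_count=3):
    """Return k-mers whose abundance differs by >= min_fold between
    conditions, ignoring very low counts (likely sequencing errors)."""
    selected = set()
    for kmer in set(study_counts) | set(control_counts):
        s, c = study_counts[kmer], control_counts[kmer]
        if max(s, c) < min_count:
            continue  # too rare to trust either way
        fold = (s + 1) / (c + 1)  # pseudocount avoids division by zero
        if fold >= min_fold or fold <= 1.0 / min_fold:
            selected.add(kmer)
    return selected
```

With a real dataset you would replace the fold-change filter with a test that models the whole time course, but the enrichment idea is the same: the selected set should be dominated by kmers from differentially expressed transcripts.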

Problem:
I have too much data to do a de novo assembly. It's of quite good quality, and even after eliminating low-frequency reads (kmers) I still have too much data to feed to an assembler like Trinity.

Question:
Assuming I can select a much smaller set of kmers that are significantly changing, how would I feed the resulting set to an assembler to generate a transcriptome of enriched genes?

Caveats:
I don't mind if the contigs created by this process represent only partial exons of the genes that are changing. I can identify them later.

Soooo

1) How would you process a set of kmers to feed to an assembler, resulting in a FASTA file of contigs?

2) If your answer involves mapping the kmers back to their source reads -- can you also suggest how to do that efficiently (realistically, in a decent time frame)?
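On point 2, one straightforward option is to avoid mapping entirely: load the selected kmers into a hash set and stream the reads once, keeping any read that contains at least one selected kmer. That is a constant-time set lookup per kmer position, a single pass over the data, and the kept reads can go straight into an assembler like Trinity. A minimal sketch (my own illustration, not from the thread; real data would also need canonical/reverse-complement kmers and, for very large sets, a memory-efficient structure such as a Bloom filter):

```python
def extract_matching_reads(reads, selected_kmers, k=21):
    """Stream reads once; keep any read containing at least one
    selected k-mer. Set membership makes each lookup O(1)."""
    kept = []
    for read in reads:
        for i in range(len(read) - k + 1):
            if read[i:i + k] in selected_kmers:
                kept.append(read)
                break  # one hit is enough; move to the next read
    return kept
```

Because this is one linear scan over the FASTQ with cheap lookups, it should finish in roughly the time it takes to read the files off disk, which is about as "decent a time frame" as one can hope for.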

All thoughts are welcome

Joe Carl

kmcarr 01-09-2014 01:11 PM

Instead of trying to pre-select kmers you deem potentially interesting (which is just another way of saying pre-biasing your result), you should look at using digital normalization of your data. Digital normalization effectively reduces the input data size, and thus the computational requirements, by discarding reads made up of high-abundance kmers. Logically, the 1000th copy of the same kmer does not add any new information to a de novo transcript assembly, so you can safely remove copies of kmers above a certain threshold without adversely affecting your assembly.
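For intuition, the core diginorm loop is tiny: keep a read only if the median abundance of its kmers, among the reads kept so far, is still below a cutoff, so high-coverage regions stop accumulating redundant reads. This is my own simplification of the idea for illustration; the real tools (khmer, Trinity's module) use probabilistic counting structures and handle reverse complements.

```python
from collections import Counter

def diginorm(reads, k=21, cutoff=20):
    """Simplified one-pass digital normalization: keep a read only if
    the median abundance of its k-mers (in the reads kept so far) is
    below the cutoff."""
    counts = Counter()
    kept = []
    for read in reads:
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        if not kmers:
            continue  # read shorter than k
        abundances = sorted(counts[km] for km in kmers)
        if abundances[len(abundances) // 2] < cutoff:
            kept.append(read)
            counts.update(kmers)  # only kept reads add to the counts
    return kept
```

Note that only kept reads update the counter, which is what makes a single streaming pass sufficient and why the memory footprint tracks the (much smaller) normalized dataset rather than the raw one.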

Trinity has a digital normalization module built in; see the Trinity documentation for details.

There is also digital normalization functionality in Titus Brown's khmer suite. Some links:

http://ged.msu.edu/angus/diginorm-2012/tutorial.html
http://ivory.idyll.org/blog/what-is-diginorm.html

Wallysb01 01-09-2014 03:57 PM

I have to second kmcarr’s suggestion of Trinity’s own digital read normalization. It took my 4 lanes’ worth of reads and cut them to about 1/10th the size. That said, there were some genes I picked up when assembling each sample individually that I didn’t find in the pooled, digitally down-sampled assembly, so the approach has some caveats; I would suggest setting the max read depth as high as your machine will allow.

However, here are some other ideas.

1) Did you do any quality filtering/trimming? What about clipping adapters and other contaminants? That may help some too.

2) How much RAM does your machine have? Might it be easier to get access to something with more (e.g. Blacklight)?

3) Have you thought about Trans-ABySS? After some quality filtering and dropping low-frequency kmers, you should be able to assemble on a reasonable number of nodes if you have access to a cluster.

4) What is your experimental setup? If it’s a time course, maybe you can just assemble the first, middle and end points and skip some samples in between? There is no reason to include all your samples if you think a few give enough data for a representative assembly. You could even do several of these partial assemblies with different samples and check whether you’re finding the same genes. My guess is that, in terms of assembly quality, you’ll hit a saturation point where more samples don’t give better results, just longer run times and higher RAM usage.

rskr 01-10-2014 03:55 AM

It's too bad SGA is still in an experimental phase, and it appears to make genome-style assumptions about coverage; otherwise, for read correction and data reduction it is an awesome assembler. Maybe you could just use its read-correction and data-reduction features.

The problem with kmer reduction is precisely that, when there is a lot of data to begin with, you can end up with more errors than signal: the signal is more likely to be redundant, and therefore discarded, especially in high-coverage regions.
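This point is easy to demonstrate: at high coverage, almost every kmer that occurs only once is an error kmer, because true kmers are seen many times over. A small simulation (my own toy, with assumed read length, error rate, and coverage, not anything from SGA) makes it concrete:

```python
import random

def simulate_error_kmers(seq, n_reads=2000, read_len=50, k=21,
                         err_rate=0.01, seed=0):
    """Simulate uniform reads with substitution errors; report how many
    singleton k-mers (count == 1) are error k-mers vs. the total."""
    rng = random.Random(seed)
    true_kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    counts = {}
    for _ in range(n_reads):
        start = rng.randrange(len(seq) - read_len + 1)
        read = list(seq[start:start + read_len])
        for j in range(read_len):
            if rng.random() < err_rate:  # substitute a different base
                read[j] = rng.choice("ACGT".replace(read[j], ""))
        read = "".join(read)
        for i in range(read_len - k + 1):
            km = read[i:i + k]
            counts[km] = counts.get(km, 0) + 1
    singletons = [km for km, c in counts.items() if c == 1]
    errors = sum(km not in true_kmers for km in singletons)
    return errors, len(singletons)
```

At roughly 100x coverage with a 1% error rate, nearly all singletons turn out to be error kmers, which is why a naive "drop rare kmers" filter mostly removes noise while a naive "keep unique kmers" filter mostly keeps it.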

