01-09-2014, 11:15 AM   #1
joeseki
Junior Member
 
Location: Maryland

Join Date: Dec 2011
Posts: 3
De Novo Transcriptome Assembly from Kmers only

Background:

I have RNA-Seq data from 8 time points comparing a study condition to a control, with each sample run on its own lane. I do not trust the reference sequence. What I'm interested in is the most significantly changing transcripts.

Hypothesis:

The kmers associated with a transcript should change in a coherent manner as that transcript's expression changes. Comparing unique kmers across samples first and extracting the most significantly changing ones should therefore enrich the transcriptome for the genes that are changing the most.
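To make that concrete, here is a rough sketch of the kind of selection I have in mind, assuming per-sample kmer count dumps in "KMER<TAB>COUNT" format (e.g. from a kmer counter run on each lane). The file names, the 100,000-kmer cutoff, and the variance-to-mean ranking are placeholders, not a settled method.

Code:
from collections import defaultdict
from statistics import mean, pvariance

# Hypothetical per-sample kmer count dumps, one "KMER<TAB>COUNT" line per kmer.
sample_dumps = ["t0.counts", "t1.counts", "t2.counts"]

# kmer -> per-sample count vector
counts = defaultdict(lambda: [0] * len(sample_dumps))
for i, path in enumerate(sample_dumps):
    with open(path) as fh:
        for line in fh:
            kmer, n = line.split()
            counts[kmer][i] = int(n)

def change_score(row):
    # Crude "how much does this kmer change" score: variance-to-mean ratio.
    m = mean(row)
    return pvariance(row) / m if m > 0 else 0.0

# Keep the 100,000 most variable kmers (an arbitrary, illustrative cutoff).
top = sorted(counts.items(), key=lambda kv: change_score(kv[1]), reverse=True)[:100000]
with open("selected_kmers.txt", "w") as out:
    for kmer, _ in top:
        out.write(kmer + "\n")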

Problem:

I have too much data to do a de novo assembly. The data is quite good quality, and even after eliminating low-frequency reads (kmers) I still have too much to feed to an assembler like Trinity.

Question:

Assuming I can select a much smaller set of kmers that are significantly changing, how would I feed the resulting set to an assembler to generate a transcriptome enriched for those changing genes?

Caveats:

I don't mind if the contigs created by this process turn out to be partial exons of the genes that are changing; I can identify them later.

Soooo

1) How would you process a set of kmers to feed to an assembler, ending up with a FASTA file of contigs?

2) If your answer suggests mapping each kmer back to its source reads -- can you also suggest how you would do that efficiently (realistically, in a decent time frame)? A naive version of what I mean is sketched below.
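For question 2, this is the brute-force baseline I would like to improve on: stream the reads once and keep any read that contains at least one selected kmer. Reverse complements are ignored here (a real kmer counter would canonicalize them), and the kmer size and file names are placeholders.

Code:
K = 25  # kmer size used when counting; an assumption here

# Load the selected kmers (e.g. the output of a selection step like the one above).
with open("selected_kmers.txt") as fh:
    selected = {line.strip() for line in fh if line.strip()}

def fastq_reads(path):
    # Yield (name, sequence) pairs from an uncompressed FASTQ file.
    with open(path) as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                return
            seq = fh.readline().rstrip()
            fh.readline()  # '+' separator line
            fh.readline()  # quality line
            yield header.lstrip("@"), seq

# Write any read containing a selected kmer to a FASTA file for assembly.
with open("enriched_reads.fa", "w") as out:
    for name, seq in fastq_reads("sample_t0.fastq"):
        if any(seq[i:i + K] in selected for i in range(len(seq) - K + 1)):
            out.write(">" + name + "\n" + seq + "\n")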

All thoughts are welcome

Joe Carl
01-09-2014, 02:11 PM   #2
kmcarr
Senior Member
 
Location: USA, Midwest

Join Date: May 2008
Posts: 1,172

Instead of trying to pre-select kmers you deem potentially interesting (which is just another way of saying you are pre-biasing your result), you should look at using digital normalization of your data. Digital normalization reduces the input data size, and thus the computational requirements, by cutting down the input of high-abundance kmers. Logically, the 1000th copy of the same kmer does not add any new information to a de novo transcript assembly, so you can safely remove copies of kmers above a certain threshold without adversely affecting your assembly.
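In case it helps to see the logic rather than a description, here is a toy sketch of the normalize-by-median idea. Real implementations (khmer, Trinity's in silico normalization) use compact probabilistic counters instead of an exact table, and k=20 with a cutoff of 20 are just common defaults, not a recommendation.

Code:
from collections import Counter
from statistics import median

K, CUTOFF = 20, 20
kmer_counts = Counter()

def keep_read(seq):
    # Accept a read only if the median abundance of its kmers, among the
    # kmers seen so far, is still below the cutoff.
    kmers = [seq[i:i + K] for i in range(len(seq) - K + 1)]
    if not kmers:
        return False
    if median(kmer_counts[k] for k in kmers) >= CUTOFF:
        return False               # this region is already covered deeply enough
    kmer_counts.update(kmers)      # accept the read and remember its kmers
    return True

# Usage: kept_reads = [r for r in stream_of_reads if keep_read(r)]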

Trinity has a digital normalization module built in; see the Trinity documentation for details.

There is also digital normalization functionality in Titus Brown's khmer suite. Some links:

http://ged.msu.edu/angus/diginorm-2012/tutorial.html
http://ivory.idyll.org/blog/what-is-diginorm.html
01-09-2014, 04:57 PM   #3
Wallysb01
Senior Member
 
Location: San Francisco, CA

Join Date: Feb 2011
Posts: 286

I have to second kmcarr's suggestion of Trinity's own digital read normalization. It took my 4 lanes worth of reads and cut them to about 1/10th of the original size. That said, there were some genes I picked up when assembling each sample individually that I didn't find in the pooled, digitally down-sampled assembly, so it has some caveats, and I would suggest setting the max read depth as high as your machine will allow.

However, here are some other ideas.

1) Did you do any quality filtering/trimming? What about clipping adapters and other contaminants? That may help some too.

2) How much RAM does your machine have? Might it be easier to get access to something with more (e.g. Blacklight)?

3) Have you thought about Trans-ABySS? After some quality filtering and dropping low-occurrence kmers, you should be able to assemble on a reasonable number of nodes if you have access to a cluster.

4) What is your experimental setup? If it's a time course, maybe you can just assemble the first, middle and end points and skip some samples in between. There is no reason to include all your samples if you think a few of them give you enough data for a representative assembly. You could even do several of these partial assemblies with different samples and check whether you're finding the same genes (a quick way to do that is sketched after this list). My guess is that, in terms of assembly quality, you'll hit a saturation point where more samples don't give better results, just longer run times and higher RAM usage.
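For that last check, something as simple as intersecting the annotation hits recovered by two partial assemblies will tell you whether you have hit saturation. The hit-list file names below are placeholders for whatever your annotation step produces (one best-hit ID per line).

Code:
def hit_ids(path):
    # One annotation hit ID (e.g. best BLAST hit) per line.
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip()}

a = hit_ids("assembly_first_mid_last.hits")
b = hit_ids("assembly_other_samples.hits")

shared = a & b
print("A:", len(a), "B:", len(b), "shared:", len(shared),
      "Jaccard:", round(len(shared) / len(a | b), 2))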
01-10-2014, 04:55 AM   #4
rskr
Senior Member
 
Location: Santa Fe, NM

Join Date: Oct 2010
Posts: 250

It's too bad SGA is still in an experimental phase, and it appears to make genome-type assumptions about coverage; otherwise it is an awesome assembler for read correction and data reduction. Maybe you could just use its read correction and reduction features.

The problem with kmer reductions is precisely that, when there is a lot of data to begin with, you end up with more errors than signal: the signal kmers are the redundant ones and are therefore the ones that get discarded, especially in high-coverage areas.
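You can check the first half of that claim yourself: count kmers in one lane and look at what fraction of the distinct kmers are seen only once; in deep data those singletons, which are mostly sequencing errors, tend to dominate the distinct-kmer set. A rough sketch, using exact counting (so only practical on a subset) and a placeholder file name:

Code:
from collections import Counter

K = 25
counts = Counter()

with open("sample_t0.fastq") as fh:
    for lineno, line in enumerate(fh):
        if lineno % 4 == 1:   # sequence lines only
            seq = line.rstrip()
            counts.update(seq[i:i + K] for i in range(len(seq) - K + 1))

distinct = len(counts)
singletons = sum(1 for c in counts.values() if c == 1)
print(distinct, "distinct kmers;", round(100.0 * singletons / distinct, 1),
      "% seen only once")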