01-09-2014, 03:57 PM   #3
Wallysb01
Senior Member
Location: San Francisco, CA
Join Date: Feb 2011
Posts: 286

I have to second kmcarr's suggestion of Trinity's own digital read normalization process. It took my 4 lanes worth of reads and cut them to about 1/10th their original size. That said, there were some genes I picked up when assembling each sample individually that I didn't find in the pooled and digitally down-sampled assembly, so it does have caveats, and I would suggest setting the max read depth as high as your machine will allow.
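In case it's useful to see what's going on under the hood, here's a rough Python sketch of the idea behind that kind of normalization (this is NOT Trinity's actual code; the k-mer size and coverage cutoff are just placeholder values): a read is kept only if the median abundance of its k-mers is still below the coverage cutoff.

Code:
# Rough sketch of diginorm-style read normalization (NOT Trinity's actual code).
# A read is kept only if the median abundance of its k-mers is still below a
# coverage cutoff; otherwise that region is assumed to be covered well enough.
from collections import defaultdict
from statistics import median

K = 25          # k-mer size (placeholder value)
MAX_COV = 30    # max read depth cutoff -- set as high as your RAM allows

kmer_counts = defaultdict(int)

def kmers(seq, k=K):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def keep_read(seq):
    """Decide whether a read adds new coverage; update counts only if kept."""
    kms = kmers(seq)
    if not kms:
        return False
    if median(kmer_counts[km] for km in kms) >= MAX_COV:
        return False
    for km in kms:
        kmer_counts[km] += 1
    return True

# usage: kept_reads = [r for r in reads if keep_read(r)]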

However, here are some other ideas.

1) Did you do any quality filtering/trimming? What about clipping adapters and other contaminants? That may help some too (there's a toy trimming sketch below this list).

2) How much RAM does your machine have? Might it be easier to get access to something with more (e.g. Blacklight)?

3) Have you thought about Trans-ABySS? After some quality filtering and dropping low-occurrence k-mers (see the filtering sketch below), you should be able to assemble it on a reasonable number of nodes if you have access to a cluster.

4) What is your experimental setup? If it's a time course, maybe you can just assemble the first, middle and end points and skip some samples in between? There's no reason to include all your samples if you think a few give you enough data for a representative assembly. You could even do several of these partial assemblies with different samples and check whether you're finding the same genes (a quick way to compare them is sketched below). My guess is that in terms of assembly quality you'll hit a saturation point where more samples don't give better results, just longer run times and higher RAM usage.
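On point 1, here's a toy version of sliding-window quality trimming, just to show the kind of thing Trimmomatic or cutadapt do for you (use a real trimmer on actual data; the window size and quality cutoff here are arbitrary):

Code:
# Toy 3' sliding-window quality trim (illustrative only -- use Trimmomatic,
# cutadapt, etc. on real data). Assumes Phred scores are already decoded to ints.
WINDOW = 4
MIN_Q = 20

def quality_trim(seq, quals, window=WINDOW, min_q=MIN_Q):
    """Cut the read at the first window whose mean quality drops below min_q."""
    for i in range(len(quals) - window + 1):
        if sum(quals[i:i + window]) / window < min_q:
            return seq[:i], quals[:i]
    return seq, quals

# e.g. quality_trim("ACGTACGTAC", [38, 38, 37, 36, 30, 12, 10, 9, 8, 2])
# -> ("ACGT", [38, 38, 37, 36])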
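To illustrate point 3, dropping low-occurrence k-mers can be as crude as counting k-mers across your reads and throwing away reads made up mostly of singletons (probable sequencing errors). The k-mer size and singleton fraction below are arbitrary, and a two-pass in-memory count like this obviously won't scale to full lanes; it's only meant to show the idea:

Code:
# Rough pre-filter: drop reads whose k-mers are mostly singletons (probable
# sequencing errors), shrinking the de Bruijn graph before assembly.
from collections import Counter

K = 25                     # placeholder k-mer size
MAX_SINGLETON_FRAC = 0.8   # arbitrary cutoff for this sketch

def kmer_filter(reads, k=K, max_frac=MAX_SINGLETON_FRAC):
    counts = Counter(r[i:i + k] for r in reads for i in range(len(r) - k + 1))
    kept = []
    for r in reads:
        kms = [r[i:i + k] for i in range(len(r) - k + 1)]
        if not kms:
            continue
        singleton_frac = sum(1 for km in kms if counts[km] == 1) / len(kms)
        if singleton_frac <= max_frac:
            kept.append(r)
    return kept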
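And on point 4, checking whether the partial assemblies are saturating could be as simple as comparing the sets of gene IDs each one recovers. This assumes you've already reduced each assembly to gene identifiers somehow (e.g. BLAST against a reference); that mapping step isn't shown:

Code:
# Compare gene sets recovered by partial assemblies built from different
# sample subsets. Assumes each assembly has already been reduced to a set of
# gene IDs (e.g. via BLAST hits to a reference).

def overlap_report(gene_sets):
    """gene_sets: dict mapping an assembly label to a set of gene IDs."""
    union = set().union(*gene_sets.values())
    core = set.intersection(*gene_sets.values())
    print(f"genes found in any assembly:   {len(union)}")
    print(f"genes found in every assembly: {len(core)}")
    for label, genes in gene_sets.items():
        others = set().union(*(g for l, g in gene_sets.items() if l != label))
        print(f"{label}: {len(genes)} genes, {len(genes - others)} unique")

# e.g. overlap_report({"early+mid": genes_a, "mid+late": genes_b,
#                      "early+late": genes_c})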