Dear all,
I'm trying to use Velvet to assemble 100 GB of paired-end short reads that do not map to the human genome.
In theory Velvet can assemble input data of any size, but because of a RAM limit (64 GB) I would like to assemble my reads in clusters.
There are already suggestions on this issue, such as partitioning the reads into random subsets or clustering them by k-mer content.
However, since my input consists of the unmapped reads, I suspect they originate from many scattered sources and will not assemble well if they are grouped randomly.
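To make the k-mer idea concrete, here is a rough sketch of the kind of binning I have in mind (everything here is illustrative; the file names, K, and N_BINS are placeholders): assign each read pair to a bin based on the minimizer (lexicographically smallest k-mer) of read 1, so that reads sharing sequence tend to land in the same bin.

```python
import gzip
import itertools
import zlib

K = 21        # assumed k-mer size (placeholder)
N_BINS = 64   # assumed number of partitions (placeholder)

def read_fastq(path):
    """Yield (header, sequence) pairs from a gzipped FASTQ file."""
    with gzip.open(path, "rt") as fh:
        while True:
            record = list(itertools.islice(fh, 4))
            if len(record) < 4:
                break
            yield record[0].rstrip(), record[1].rstrip()

def minimizer(seq, k=K):
    """Return the lexicographically smallest k-mer of seq."""
    return min(seq[i:i + k] for i in range(len(seq) - k + 1))

# One output file per bin; both mates go to the same bin so pairs stay intact.
outs = [open(f"bin_{i:02d}.fa", "w") for i in range(N_BINS)]
pairs = zip(read_fastq("unmapped_R1.fastq.gz"),   # placeholder file names
            read_fastq("unmapped_R2.fastq.gz"))
for (h1, s1), (h2, s2) in pairs:
    if len(s1) < K:
        continue  # read too short to yield a k-mer
    # crc32 gives a deterministic bin assignment across runs
    b = zlib.crc32(minimizer(s1).encode()) % N_BINS
    outs[b].write(f">{h1[1:]}\n{s1}\n>{h2[1:]}\n{s2}\n")
for fh in outs:
    fh.close()
```

In principle each bin_*.fa could then be run through velveth/velvetg separately, keeping the peak memory of each run much lower, but I'm not sure how well this would work for reads this heterogeneous.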
So what would you suggest? Which program is worth a try?
Thanks