SEQanswers

SEQanswers > Applications Forums > RNA Sequencing



Old 11-18-2013, 06:49 AM   #1
horvathdp
Member
 
Location: Fargo

Join Date: Dec 2011
Posts: 66
Default Low mapping of reads to trinity assembly

Hi all,

I am using iPlant resources, and I just finished assembling my RNA-seq data with Trinity (great results: ~200K contigs, N50 of ~1500, mean length of ~900, and CEGMA recovering 241 of the 248 conserved sequences as complete). With this in hand, I used TopHat2 to map my fragments as the first step toward identifying differentially expressed transcripts. However, only about 38% of my fragments map to the Trinity contigs for any given sample. Is this expected? Is there a better way to do this? Any advice would be appreciated.
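As an aside, a figure like that 38% can be double-checked by parsing `samtools flagstat` output on the TopHat2 BAM. Below is a minimal Python sketch; the flagstat text is made-up example output, and `mapped_fraction` is a hypothetical helper, not part of samtools or TopHat.

```python
import re

# Made-up `samtools flagstat` output, for illustration only.
flagstat_text = """\
10000000 + 0 in total (QC-passed reads + QC-failed reads)
0 + 0 duplicates
3800000 + 0 mapped (38.00%:-nan%)
"""

def mapped_fraction(text):
    """Return mapped reads divided by total reads from flagstat text."""
    total = int(re.search(r"(\d+) \+ \d+ in total", text).group(1))
    mapped = int(re.search(r"(\d+) \+ \d+ mapped", text).group(1))
    return mapped / total

print(mapped_fraction(flagstat_text))  # 0.38
```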

Dave
Old 11-18-2013, 07:17 AM   #2
flyingoyster
Member
 
Location: NJ

Join Date: Aug 2011
Posts: 10
Default

I had a similar problem, but with an even lower mapping rate (about 2%). What could be the cause? If you figure it out, please let me know. Thanks!
Old 11-18-2013, 02:35 PM   #3
horvathdp
Member
 
Location: Fargo

Join Date: Dec 2011
Posts: 66
Default More thoughts

I think one of my problems is that at least 25% of the reads are too short to allow paired-end mapping. I am running TopHat2 in single-end mode to see if that significantly improves the percentage of mapped reads. I also assume that I lost any reads that didn't assemble into contigs of at least 200 bases; I am not sure how to figure out what percentage that is. Any ideas from the experts out there?
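To put a number on the short-read issue, the fraction of reads below a length cutoff can be counted straight from the trimmed FASTQ. A minimal sketch, using three made-up inline records and an arbitrary 20 bp cutoff; in practice you would pass an open file handle for the real FASTQ instead:

```python
import io

def short_read_fraction(handle, min_len=20):
    """Fraction of FASTQ records whose sequence is shorter than min_len."""
    total = short = 0
    for i, line in enumerate(handle):
        if i % 4 == 1:  # the sequence line of each 4-line FASTQ record
            total += 1
            if len(line.strip()) < min_len:
                short += 1
    return short / total

# Three made-up records; in practice use open("trimmed.fastq") instead.
example = io.StringIO(
    "@read1\nACGTACGTACGTACGTACGTACGTACGT\n+\nIIIIIIIIIIIIIIIIIIIIIIIIIIII\n"
    "@read2\nACGTACGT\n+\nIIIIIIII\n"
    "@read3\nACGTACGTACGT\n+\nIIIIIIIIIIII\n"
)
print(short_read_fraction(example))  # 2 of the 3 reads are under 20 bp
```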

Finally, one more question for the experts: I plan on using the Tuxedo suite to analyze my data, mapping my trimmed sequences to the Trinity contigs with TopHat, then using Cuffmerge and Cuffdiff to identify differentially expressed genes. Is this a reasonable approach? Are there better approaches for mapping back to a de novo assembled transcriptome? Are there any problems I will likely encounter? Any advice will be appreciated.
Old 11-19-2013, 07:30 AM   #4
ddb
Member
 
Location: Europe

Join Date: Feb 2012
Posts: 13
Default

What is the length of your reads, and what is your expected insert size / fragment length? The Tuxedo suite is designed for use with a genome sequence as reference; I do not think it is the right tool for de novo assembled contigs. For analyzing contigs I would look at Bowtie/Bowtie2 for mapping, RSEM/eXpress for abundance estimation, and then one of the Bioconductor tools for differential expression testing. I am sure others can suggest different tools. You could try following the Trinity protocol for downstream analysis.
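For intuition about what those abundance estimators report: RSEM and eXpress ultimately produce length-normalized values such as TPM. A toy sketch of the TPM calculation itself, with made-up counts and effective lengths (not output from either tool):

```python
def tpm(counts, eff_lengths):
    """Transcripts per million: per-length count rate, rescaled to sum to 1e6."""
    rates = [c / l for c, l in zip(counts, eff_lengths)]
    total = sum(rates)
    return [r / total * 1e6 for r in rates]

# Three made-up contigs: raw read counts and effective lengths in bases.
print(tpm([100, 200, 300], [1000, 1000, 2000]))
```

Note that the third contig gets more reads than the second but a lower TPM, because its greater length is normalized away.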