SEQanswers

Old 05-02-2010, 11:42 PM   #41
cek
Junior Member
 
Location: France

Join Date: Jan 2010
Posts: 4
Default

In order to use DESeq to test for differential expression between my RNA-seq conditions, I calculate raw read counts per transcript from the Cufflinks output with the following formula (as proposed by RockChalkJayhawk):
raw1 = FPKM * length (kb) * number of mapped reads (millions)

However, in another SEQanswers post (http://seqanswers.com/forums/showthr...links+coverage), Cole Trapnell suggests calculating raw read counts like this:
raw2 = coverage * length (from the transcripts.expr file)

These two calculations do not lead to the same result. Has anyone noticed the same discrepancy in their data?
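For concreteness, the two back-calculations can be written out as code. This is a sketch only: the function names and example numbers are invented, and this is not a Cufflinks API.

```python
# Sketch of the two back-calculations discussed in this post.
# Function names and example values are invented; not a Cufflinks API.

def raw_from_fpkm(fpkm, length_bp, mapped_reads):
    """raw1 = FPKM * length (kb) * mapped reads (millions)."""
    return fpkm * (length_bp / 1000.0) * (mapped_reads / 1e6)

def raw_from_coverage(coverage, length_bp):
    """raw2 = coverage * length, as read from transcripts.expr.
    If 'coverage' is average depth per base, this counts sequenced
    bases rather than reads, which may be one source of the
    discrepancy between raw1 and raw2."""
    return coverage * length_bp

# A 2 kb transcript at FPKM 50 in a 20M-read library:
raw1 = raw_from_fpkm(50.0, 2000, 20_000_000)  # 50 * 2 * 20
raw2 = raw_from_coverage(50.0, 2000)
```

Note that the two quantities are in different units unless coverage is divided by read length, which would be one hypothesis for why they disagree.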

Last edited by cek; 05-12-2010 at 02:18 AM.
cek is offline   Reply With Quote
Old 05-31-2010, 07:44 PM   #42
answersseq
Junior Member
 
Location: pa

Join Date: Feb 2009
Posts: 7
Default

I learn a lot from the discussions here.
Any opinions on microRNA sequencing data? The reads are of similar length, but many map to multiple locations (or to several mature miRs).
When comparing differential expression between cell lines or tissues, I would expect large differences, and there are no housekeeping miRs to normalize against.


Quote:
Originally Posted by Simon Anders View Post
In case this got lost in my lengthy post #12:

The reason why raw counts are advantageous to FPKM values for statistical inference is discussed in this thread, from post #6 onwards: http://seqanswers.com/forums/showthread.php?t=4349
answersseq is offline   Reply With Quote
Old 11-15-2010, 12:46 PM   #43
hypatia
Junior Member
 
Location: NY

Join Date: Oct 2010
Posts: 5
Default normalization with all or uniquely mapped reads

Hi Zee,
I was wondering if you ever got an answer to this question. Is it (3)?
Should I leave unaligned or ambiguously mapped reads out of the normalization?


"I've read about people doing counts as reads per million and log transforming these values to fit Poisson distribution, but it's sprung multiple ideas in my mind. Would this be as simple as dividing my counts for each experiment by
1) 1 Million
2) the total number of reads sequenced
3) the total number of uniquely mapped reads

I'm inclined to option (3) because that represents the amount of usable sequence data."
hypatia is offline   Reply With Quote
Old 11-15-2010, 07:27 PM   #44
zee
NGS specialist
 
Location: Malaysia

Join Date: Apr 2008
Posts: 249
Default

I would go with uniquely mapped reads, because that is a more accurate representation of how much usable sequence data you obtained from your runs.
You could be a bit more stringent by using Picard to filter possible PCR duplicates out of the alignments in BAM format.
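A minimal sketch of option (3), scaling by uniquely mapped reads. The gene names, counts, and library size below are invented.

```python
# Sketch of option (3): scale raw counts by the number of uniquely
# mapped reads. Gene names, counts, and library size are invented.

def cpm(count, unique_mapped_total):
    """Counts per million uniquely mapped reads."""
    return count * 1e6 / unique_mapped_total

counts = {"geneA": 500, "geneB": 20}
library_size = 8_000_000  # uniquely mapped reads, after duplicate removal

normalized = {gene: cpm(c, library_size) for gene, c in counts.items()}
```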
zee is offline   Reply With Quote
Old 04-10-2011, 09:21 PM   #45
syambmed
Junior Member
 
Location: malaysia

Join Date: Mar 2011
Posts: 5
Default

Hi guys,

I have transcriptome data from Illumina and am using CLC Genomics Workbench for the analysis; I am not familiar with other programs for transcriptome analysis. The data come from one sample of control cells and one sample of treated cells (no replicates for either), and I am looking for differentially expressed genes.

The problem is the normalization step. The software offers three normalization methods: 1) scaling [options: normalization value = mean or median; baseline = median mean or median median], 2) quantile, and 3) total reads per million.
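For illustration, toy versions of two of these options, total-count scaling and quantile normalization, might look like the following. This is a sketch, not CLC's implementation; all names and values are invented.

```python
# Toy versions of total-count scaling and quantile normalization
# for two samples. This is NOT CLC Genomics Workbench's code.

def total_count_norm(sample, per=1e6):
    """Scale each value so the sample sums to `per` (reads per million)."""
    total = sum(sample)
    return [v * per / total for v in sample]

def quantile_norm(a, b):
    """Replace each value by the mean of the two values at the same rank,
    forcing both samples onto an identical distribution."""
    means = [(x + y) / 2 for x, y in zip(sorted(a), sorted(b))]

    def remap(sample):
        order = sorted(range(len(sample)), key=lambda i: sample[i])
        out = [0.0] * len(sample)
        for rank, i in enumerate(order):
            out[i] = means[rank]
        return out

    return remap(a), remap(b)
```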

I don't know which one to choose. Help me!

There are also statistical tests on Gaussian data or on proportions. How do I know which test my data is suited to? I read that most people use Baggerley's test.

One thing about Baggerley's test: the output includes both a p-value and an FDR (false discovery rate)-corrected p-value. Which one is used for determining differentially expressed genes?

Thank you.
syambmed is offline   Reply With Quote
Old 04-10-2011, 10:44 PM   #46
marcora
Member
 
Location: Pasadena, CA USA

Join Date: Jan 2010
Posts: 52
Default

Quote:
Originally Posted by syambmed View Post
Hi guys,
I have transcriptome data from Illumina and am using CLC Genomics Workbench for the analysis; I am not familiar with other programs for transcriptome analysis. The data come from one sample of control cells and one sample of treated cells (no replicates for either), and I am looking for differentially expressed genes.
If I were a reviewer, I would doubt any conclusion coming from an experiment with no biological replicates. That said, DESeq does allow such a design; you may want to consider it. I am not familiar with CLC Genomics Workbench.
marcora is offline   Reply With Quote
Old 04-12-2012, 01:23 AM   #47
sikidiri
Member
 
Location: france

Join Date: May 2011
Posts: 13
Default Method to categorize mRNA-seq data based upon expression value

Hello All,
I have preprocessed mRNA-seq data for the hg19 genome, in which an RPKM value has been calculated for each gene. The values range from 0 to 99,960. I have just one sample.
I want to categorize these genes into highly, moderately, and weakly expressed groups.
What would be the best way to do this?
Your suggestions would be highly appreciated.
Thanks a lot.
sikidiri is offline   Reply With Quote
Old 04-12-2012, 01:40 AM   #48
steven
Senior Member
 
Location: Southern France

Join Date: Aug 2009
Posts: 269
Default

Quote:
Originally Posted by sikidiri View Post
Hello All,
I have preprocessed mRNA-seq data for the hg19 genome, in which an RPKM value has been calculated for each gene. The values range from 0 to 99,960. I have just one sample.
I want to categorize these genes into highly, moderately, and weakly expressed groups.
What would be the best way to do this?
Your suggestions would be highly appreciated.
Thanks a lot.
As RPKM is a normalized expression measure, you can in theory directly compare values between genes within the same sample, keeping in mind a couple of reported biases (gene length, GC content, etc.).

I would first sort the values and use percentiles ("tiers") to define categories with similar populations, then inspect the threshold values.
You may also want to consider absolute thresholds (like RPKM < 1, 1 < RPKM < 10, and RPKM > 10), but I do not know whether there are "standards" for such values, and I doubt it is reasonable in practice to compare values obtained from different protocols/conditions/software.
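The percentile-tier idea can be sketched as follows. The RPKM values and the choice of three tiers are invented for illustration.

```python
# Sketch of percentile-based tiers for a single sample's RPKM values.
# The values and the choice of three tiers are invented.

def tier_cuts(rpkms, n_tiers=3):
    """RPKM values at the boundaries between equal-population tiers."""
    s = sorted(rpkms)
    return [s[len(s) * k // n_tiers] for k in range(1, n_tiers)]

def categorize(rpkm, cuts):
    """Label one RPKM value against the tier boundaries."""
    if rpkm < cuts[0]:
        return "low"
    if rpkm < cuts[1]:
        return "medium"
    return "high"

rpkms = [0, 0.5, 1, 2, 5, 10, 40, 120, 99960]
cuts = tier_cuts(rpkms)               # inspect these thresholds
labels = {v: categorize(v, cuts) for v in rpkms}
```

As steven notes, the interesting step is inspecting `cuts` to see whether the data-driven boundaries look biologically sensible.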
steven is offline   Reply With Quote
Old 04-12-2012, 01:47 AM   #49
sikidiri
Member
 
Location: france

Join Date: May 2011
Posts: 13
Default

Hello Steven,
Thanks for your answer. But how do I decide on the thresholds for these expression-based categories? That is my main problem. Do you think any statistical tests would help? Any paper or example would help me understand this better.
Thanks again.
sikidiri is offline   Reply With Quote
Old 04-12-2012, 02:39 AM   #50
dpryan
Devon Ryan
 
Location: Freiburg, Germany

Join Date: Jul 2011
Posts: 3,480
Default

It's been a while since I've done it, but if you google "cluster optimal group number" you can find methods such as gap statistics and other approaches for finding an optimal cluster number. I recall there being R packages for a lot of this, such as the cluster package.
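As a rough illustration of picking a cluster number on one-dimensional expression values, here is a plain Lloyd k-means with a within-cluster sum-of-squares (WSS) "elbow" curve. The gap statistic (e.g. `clusGap` in R's cluster package) is the more principled route; all values here are invented.

```python
import random

# Rough sketch: choose a cluster number for 1-D expression values by
# looking at the within-cluster sum of squares (WSS) as k grows.
# The gap statistic is more principled; values below are invented.

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain Lloyd k-means on 1-D data; returns (centers, WSS)."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    wss = sum(min(abs(v - c) for c in centers) ** 2 for v in values)
    return centers, wss

rpkms = [0.0, 0.5, 1.0, 2.0, 40.0, 50.0, 900.0, 1000.0]
curve = {k: kmeans_1d(rpkms, k)[1] for k in (1, 2, 3, 4)}
# Look for the k after which WSS stops dropping sharply (the "elbow").
```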
dpryan is offline   Reply With Quote
Old 09-07-2012, 01:37 PM   #51
carmeyeii
Senior Member
 
Location: Mexico

Join Date: Mar 2011
Posts: 137
Default

Quote:
Originally Posted by lpachter View Post
That's correct - the procedure RCJ suggests will give you an estimate of the actual tag count for each transcript.

Is this to say that if one sums up (FPKM * length in kb * reads mapped in millions) over each transcript of a gene, one obtains the total *estimated* read count for that gene?

But this has to be done individually for each transcript and then aggregated per gene, right?
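A sketch of the per-transcript-then-sum procedure described above. The gene IDs, FPKMs, lengths, and mapped-read total are all invented.

```python
# Sketch: estimate a count for each transcript, then sum transcripts
# belonging to the same gene. All identifiers and numbers are invented.

def est_count(fpkm, length_bp, mapped_reads):
    """FPKM * length (kb) * mapped reads (millions)."""
    return fpkm * (length_bp / 1000.0) * (mapped_reads / 1e6)

transcripts = [
    ("GENE1", 10.0, 1500),  # (gene, FPKM, transcript length in bp)
    ("GENE1", 2.0, 3000),
    ("GENE2", 7.0, 2000),
]
mapped = 30_000_000

gene_counts = {}
for gene, fpkm, length in transcripts:
    gene_counts[gene] = (gene_counts.get(gene, 0.0)
                         + est_count(fpkm, length, mapped))
```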


Carmen
carmeyeii is offline   Reply With Quote
Old 12-12-2012, 03:30 AM   #52
lpachter
Member
 
Location: Berkeley, CA

Join Date: Feb 2010
Posts: 40
Default

Quote:
Originally Posted by Simon Anders View Post

If I don't care about isoforms or think that my coverage is too low to distinguish isoforms anyway, I expect to get optimal power by simply summing everything up.
Please see Figure 1 of http://www.nature.com/nbt/journal/va...0.html#/figure

Quote:
Originally Posted by Simon Anders View Post

Cuffdiff is, as I understand it, designed to deal with such issues, while our approach ignores them. I expect that DESeq, in compensation for being unsuitable to detect differences in isoform proportions as in your example, achieves much better detection power for differences in total expression (per gene, summing over isoforms), especially at very low counts.
Please see Figure 3 of
http://www.nature.com/nbt/journal/va...0.html#/figure

Quote:
Originally Posted by Simon Anders View Post

As I am not clear on how biological noise is taken into account by cuffdiff I cannot be fully sure whether this expectation will hold (and I'm quite curious to learn more about cuffdiff once your paper is out).
Please see
http://www.nature.com/nbt/journal/va...0.html#/figure
lpachter is offline   Reply With Quote
Old 12-12-2012, 05:44 AM   #53
jparsons
Member
 
Location: SF Bay Area

Join Date: Feb 2012
Posts: 62
Default

Quote:
Originally Posted by lpachter View Post
It's important to note the limitations of raw-count methods, but has anyone checked whether any of the isoform-detection algorithms can actually discriminate between isoforms well enough to assign those counts properly? I've seen simulated data showing RSEM unable to reproduce the 'truth' half the time, even with simple two-isoform mixes.

Cufflinks' model in Figure 2 has three times more counts than Figure 1 and doesn't differentiate anywhere near as cleanly between isoforms. Surely maximum-likelihood count assignment can be incorrect too, given ambiguous reads? Looking at the supplementary materials, however, I'm inclined to accept that it may be incorrect less often than raw counts when dealing with real data.
jparsons is offline   Reply With Quote
Tags
normalization, rna-seq
