SEQanswers

SEQanswers (http://seqanswers.com/forums/index.php)
-   Bioinformatics (http://seqanswers.com/forums/forumdisplay.php?f=18)
-   -   edgeR questions (http://seqanswers.com/forums/showthread.php?t=70101)

dr63 07-05-2016 12:26 AM

edgeR questions
 
Hello,

I have some questions about my RNA-seq analysis with edgeR. I have followed the official user's guide (LINK), but it is sometimes quite succinct.

1/ How should one interpret (or what do we expect to see in) a geom_boxplot of the count table, plotting log10(value) vs. the sample variables?

-> I think we expect roughly the same shapes and variability for each sample.

But what if one of the samples differs? And how large does the variation have to be before it matters?

At this time I also get “Warning message: Removed 630034 rows containing non-finite values (stat_boxplot).”

I read here that "it is OK to discard the zero values".
But my table contains 1,019,835 observations. Is it acceptable to remove 630,034 of them? I am not sure I understand.
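If I understand correctly, the warning comes from zero counts: log10(0) is -Inf, which ggplot2 treats as non-finite and silently drops before drawing the boxplot. A minimal sketch with made-up counts:

```r
# Hypothetical counts for one gene across four samples
counts <- c(0, 0, 5, 120)
log10(counts)                    # -Inf -Inf 0.699 2.079
sum(!is.finite(log10(counts)))   # 2 rows would be "removed" by ggplot2
```

So the 630,034 "removed rows" would simply be the zero entries of the count table, not data lost from the analysis itself.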

2/ Then, as suggested on page 11 of the guide, I pre-filtered my data:

Code:

keep <- rowSums(cpm(y)>1) >= 3
y <- y[keep, , keep.lib.sizes=FALSE]

(By doing this I go from 68,000 contigs to 20,000.)

Is it OK to put "3" on the first line, since I have 3 replicates per condition in my design?

I do not really understand what happens on line 2 via y[keep, , keep.lib.sizes=FALSE].
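My current understanding of line 2, sketched on a plain matrix (the DGEList should behave analogously for row subsetting; keep.lib.sizes=FALSE additionally recomputes the library sizes from the rows that remain):

```r
# Toy stand-in for the count table: 4 "contigs" x 3 "samples"
m <- matrix(1:12, nrow = 4)
keep <- c(TRUE, FALSE, TRUE, TRUE)   # like the output of rowSums(cpm(y) > 1) >= 3
m[keep, ]                            # keeps the TRUE rows; the empty column index means "all columns"
nrow(m[keep, ])                      # 3
```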

3/ I made the plotBCV plot. The guide is quite vague about how to interpret this cloud of points and what to expect.

I also got "Disp = 0.04688 and BCV = 0.2165".

It is said that 40% biological variability (BCV = 0.4) is common in human data.
I am working on lettuce. What about my Disp and my BCV?
And how should I interpret the shape of my "BCV vs. average log CPM" plot?
Is this step really meaningful?
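One thing I did notice: the two reported numbers are consistent with each other, since edgeR defines BCV as the square root of the common dispersion:

```r
disp <- 0.04688
sqrt(disp)   # 0.2165..., matching the reported BCV
```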

4/ Finally, I tested for differential gene expression, filtering on "False Discovery Rate" (FDR) and "logFC":

Code:

lrt1 <- glmLRT(fit, contrast=my.contrasts[,"day7dark.vs.day7light"])
topT1 <- as.data.frame(topTags(lrt1, n=Inf))  # n=Inf keeps all genes; dim(table)[0] is empty since R is 1-indexed
a <- topT1[topT1$FDR < 0.05, ]
b <- a[a$logFC > 0.3, ]
c <- b[order(-b$logFC), ]

Here is a head result:

Quote:

|           | logFC | logCPM | LR     | PValue | FDR  |
| Lsat_xxxx | 6.34  | -0.86  | 135.89 | 0.00   | 0.00 |
I am not sure I understand why I still see a very low logCPM (-0.86) here, although I already filtered my data earlier:
Code:

keep <- rowSums(cpm(y)>1) >= 3
Is it a kind of "mean logCPM" across all the samples compared here?
If so, I think I should filter again and focus only on the genes with the best "FDR + logCPM + logFC"… Am I right?

I do not know what to do with the "LR" column in this table. Is it useful to also filter my genes by LR?
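From what I can tell, LR is the likelihood-ratio test statistic, and PValue is computed from it (chi-squared, df = 1 for a single contrast), so filtering on LR would be redundant with filtering on PValue/FDR. A quick check on the row above:

```r
# p-value implied by the LR statistic of the Lsat_xxxx row (df = 1, one contrast)
pchisq(135.89, df = 1, lower.tail = FALSE)   # ~2e-31, i.e. the "0.00" PValue shown
```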

Thanks a lot for your help.

