SEQanswers





10-08-2012, 10:27 AM   #1
anandksrao
Junior Member
Location: Sacramento

Data filters at what stage of NGS data analysis?

Greetings friends!

I am seeking help with my data: 3 time points x 3 genotypes x 3 replicates = 27 libraries.

The goal is to find genes whose expression profiles over time differ between two or more genotypes.

After our first round of data analysis (including TMM normalization), the time-course graphs and box plots were so noisy, with high standard errors at each time point, that it was hard to tell whether the expression profile of one genotype overlapped with or was distinct from those of the other genotypes. R code is attached at the bottom of this post.

So in short, we now need to apply data filters to reduce the noise in our data. Some ideas (sketched in code just below) are:
- removing genes that have low expression (count) levels
- removing genes that have high variance across replicates
- removing genes that have low variance across time (constitutively expressed genes are biologically less interesting)
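One rough sketch of how these three filters might look on the raw counts, reusing the f.dat, S and Time objects defined in the code at the bottom of this post; the 1-CPM and standard-deviation cut-offs are arbitrary placeholders, not recommendations:

Code:
library(edgeR)   # for cpm()

grp     = paste(S, Time, sep = "_")            # 9 genotype-by-time groups, 3 replicates each
log.cpm = cpm(as.matrix(f.dat), log = TRUE)    # library-size-adjusted values, used only for filtering

# (1) drop low-expression genes: > 1 CPM in at least 3 libraries (one replicate group)
keep.expressed = rowSums(cpm(as.matrix(f.dat)) > 1) >= 3

# (2) drop genes with high variance across replicates: worst within-group SD on the log-CPM scale
rep.sd     = apply(log.cpm, 1, function(x) max(tapply(x, grp, sd)))
keep.tight = rep.sd < 1.5

# (3) drop genes with low variance across time: SD of the 9 group means per gene
grp.means     = apply(log.cpm, 1, function(x) tapply(x, grp, mean))   # 9 x genes matrix
keep.changing = apply(grp.means, 2, sd) > 0.5

f.dat.filtered = f.dat[keep.expressed & keep.tight & keep.changing, ]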

So my question to you is: at what stage of my analysis should I apply these filters?
On the raw counts themselves, prior to normalization?
Or should I perform the TMM normalization first, use the normalization factors to transform my data into non-integer normalized counts, and then filter (in which case I think I can no longer fit a negative binomial model, right?)
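For context on that last point: in the code below, calcNormFactors() only stores the TMM factors in count.d$samples$norm.factors, and the counts slot stays as raw integers; non-integer values only appear if they are computed explicitly, e.g. with cpm(). A minimal sketch of that distinction (assuming an edgeR version with the cpm() helper; object names are only illustrative):

Code:
library(edgeR)

d = DGEList(counts = as.matrix(f.dat), group = S)   # illustrative DGEList; counts stay as raw integers
d = calcNormFactors(d)                              # TMM factors are stored, the counts are untouched

d$samples$norm.factors   # the TMM scaling factors (used together with lib.size when the GLM is fitted)
head(d$counts)           # still the original integer counts: this is what glmFit() sees

norm.values = cpm(d, log = TRUE)   # non-integer normalized values, useful for filtering and plotting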

Code:
count = read.table("Input.txt", sep = "\t", header = TRUE)
# read in the raw mapped-count data

f.count = count[apply(count[, -c(1, ncol(count))], 1, sum) > 27, ]
# keep only genes with a total read count > 27 across all 27 libraries

f.dat = f.count[, -c(1, ncol(count))]
# keep only the read-count columns, dropping the rest of the data frame

S = factor(rep(c("gen1", "gen2", "gen3"), rep(9, 3)))
# define the genotype factor (9 libraries per genotype)

Time = factor(rep(rep(c("0", "10", "20"), rep(3, 3)), 3))
# define the time-point factor

Time.rep = rep(1:3, 9)
# define the replicate index

Group = paste(S, Time, Time.rep, sep = "_")
# label each library as genotype_time_replicate

library(edgeR)
# load the edgeR package

f.factor = data.frame(files = names(f.dat), S = S, Time = Time, lib.size = c(apply(f.dat, 2, sum)), norm.factors = calcNormFactors(as.matrix(f.dat)))
# assemble the sample information for edgeR

count.d = new("DGEList", list(samples = f.factor, counts = as.matrix(f.dat)))
# construct the DGEList object

design = model.matrix(~ Time + S)
# build the design matrix

count.d = calcNormFactors(count.d)
# TMM normalization

glmfit.d = glmFit(count.d, design, dispersion = 0.1)
# fit the negative binomial generalized linear models (dispersion fixed at 0.1 rather than estimated)

lrt.count = glmLRT(count.d, glmfit.d)
# likelihood ratio tests

result.count = data.frame(f.count, lrt.count$table)
# combine the raw data with the edgeR results

result.count$FDR = p.adjust(result.count$p.value, method = "BH")
# Benjamini-Hochberg adjustment to get the false discovery rate

write.table(result.count, "edgeR.Medicago_count_WT_Mu3.txt", sep = "\t", row.names = FALSE)
# save the combined data set
10-10-2012, 02:11 AM   #2
markrobinsonca
Junior Member
Location: Zurich

See this:
https://stat.ethz.ch/pipermail/bioco...er/048508.html

Tags
data filter, model fitting, negative binomial, time course deseq, tmm normalization
