11-24-2013, 02:00 PM · Junior Member, St Louis (joined Nov 2013)

edgeR spliceVariants: gene- and exon-level dispersion


I'm trying to detect alternative splicing between two experimental conditions
using edgeR's spliceVariants (and comparing with DEXSeq).

For each gene, spliceVariants uses a single dispersion calculated by
estimateExonGeneWiseDisp, which simply sums all exon counts within a
gene and estimates a per-gene dispersion from those aggregated counts.
This seems highly anti-conservative (i.e., it gives extremely low
dispersions): the counts actually being fit are exon-level counts,
i.e., smaller numbers with larger dispersions. Am I missing some
theoretical or intuitive justification for this choice? Wouldn't a less
anti-conservative choice be the minimum dispersion across all exons within
the gene (still larger than the one estimateExonGeneWiseDisp provides),
while an intuitively conservative choice would be the maximum?
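To make the intuition above concrete, here is a small sketch of why summing exon counts deflates the dispersion estimate. All numbers (exon means, the dispersion value) are made up for illustration, not edgeR output, and the exons are treated as independent, which real exon counts within a gene are not:

```python
# If exon counts X_i ~ NB(mu_i, phi) independently, then
#   Var(sum X_i) = sum(mu_i) + phi * sum(mu_i**2),
# so the summed counts look NB with mean M = sum(mu_i) and an
# effective dispersion phi_g = phi * sum(mu_i**2) / M**2 <= phi.
mu  = [20, 50, 10, 80, 30]   # hypothetical per-exon mean counts
phi = 0.2                    # common per-exon dispersion

M     = sum(mu)
phi_g = phi * sum(m * m for m in mu) / M**2

print(round(phi_g, 3))       # 0.057 -- well below the per-exon 0.2
```

So even when every exon shares the same dispersion, the aggregated counts imply a much smaller one, which matches the anti-conservative behavior described above.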

If I understand this statistical framework correctly, I should be able to use
per-exon dispersions. Clearly this is possible if I take my tags to be exons,
but in principle it should also be possible in the spliceVariants scenario, in
which the tags are genes but the counts represent exons; DEXSeq appears to do
exactly this. Is there a straightforward way to do this within edgeR? The
interface to glmFit seems to preclude it.
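To illustrate what is at stake in using per-exon rather than gene-level dispersions, here is a sketch of the negative binomial variance under each choice. All numbers are hypothetical, not fitted values from edgeR or DEXSeq:

```python
# NB variance is Var(X) = mu + phi * mu**2. A single aggregated gene
# dispersion can badly understate the variance of a noisy exon, which
# is the anti-conservatism described above.
mu       = [20.0, 50.0, 10.0]   # hypothetical fitted exon means
phi_exon = [0.4, 0.1, 0.6]      # hypothetical per-exon dispersions
phi_gene = 0.057                # single aggregated gene-level dispersion

var_per_exon = [m + p * m * m for m, p in zip(mu, phi_exon)]
var_shared   = [m + phi_gene * m * m for m in mu]

print(var_per_exon[2], round(var_shared[2], 1))   # 70.0 15.7
```

For the noisiest exon, the shared gene dispersion assigns roughly a quarter of the variance that its own dispersion would, so its counts look far more significant than they should.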

Thank you,
Brian