I was wondering a bit about the statistical models used by peak-calling software and how they compare to the models normally employed by packages designed for RNA-seq. It's my understanding that most RNA-seq tools use the negative binomial distribution for differential expression analysis. The way I understand it, a negative binomial distribution is a lot like a Poisson distribution, except that the negative binomial takes into account the fact that the mean and variance won't be equal across biological replicates.
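To check that I'm understanding the overdispersion part correctly, here's a quick numpy sketch (the parameter values are arbitrary, just chosen so both distributions have the same mean):

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson: variance equals the mean.
pois = rng.poisson(lam=10, size=100_000)

# Negative binomial with the same mean, but extra ("over") dispersion.
# In numpy's (n, p) parameterisation: mean = n*(1-p)/p, var = n*(1-p)/p**2.
n, p = 5, 1 / 3  # mean = 10, var = 30
nb = rng.negative_binomial(n, p, size=100_000)

print(pois.mean(), pois.var())  # both close to 10
print(nb.mean(), nb.var())      # mean close to 10, variance much larger
```

If that's right, the NB's extra variance is what soaks up the replicate-to-replicate variability that a plain Poisson can't.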
From what I've read, some ChIP-seq tools use the negative binomial distribution, while others use a Hidden Markov Model (HMM). I read the Wikipedia entry on HMMs and found it... less than clear. The way I understand it, a Markov model refers to a series of discrete "states", where each state depends upon the previous state in a non-deterministic way. So, for example, a tree might be classified as "small", "medium" or "large". That tree has to go through the "medium" state before it can reach the "large" state, but just because a tree is "medium" doesn't mean it will ever become "large". The size of the tree is related to how rainy it was in a previous year, so if it was exceptionally rainy, there might be a 70% chance that the tree you observe will be large. A Hidden Markov Model, then, is a Markov chain in which some of the states are hidden, or unobserved. So to use the tree example, if you didn't have the weather data for the previous year, you'd have to infer how rainy it was based upon how many "large" trees you observe.
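To make my tree example concrete, here's how I'd sketch it as a toy HMM in Python (every number here is made up purely for illustration): hidden states are the weather, observations are tree sizes, and the forward algorithm sums over all possible hidden weather sequences to get the probability of what we actually observed.

```python
import numpy as np

# Toy HMM for the tree example -- all probabilities are invented.
# Hidden states: last year's weather, {dry, rainy}
# Observations:  tree size, {small, medium, large}
obs_symbols = ["small", "medium", "large"]

start = np.array([0.6, 0.4])        # P(weather in year 1)
trans = np.array([[0.7, 0.3],       # P(next year's weather | dry)
                  [0.4, 0.6]])      # P(next year's weather | rainy)
emit = np.array([[0.5, 0.4, 0.1],   # P(tree size | dry)
                 [0.1, 0.2, 0.7]])  # P(tree size | rainy)

def forward(obs):
    """Forward algorithm: P(observed sizes), summed over hidden weather paths."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

seq = [obs_symbols.index(s) for s in ["large", "large", "medium"]]
print(forward(seq))  # likelihood of seeing these tree sizes
```

Seeing mostly "large" trees pushes the inferred weather toward "rainy", exactly because the rainy state emits "large" with high probability.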
Assuming that understanding isn't completely wrong, I don't really see how it applies to ChIP-seq. Can someone explain to me why this model is used in ChIP-seq analysis, and why some tools use the negative binomial distribution instead?