Hi,
I am new to ChIP-seq and followed the traditional way to analyze my data.
I normalized my two ChIP-seq samples by total read count, after dividing the genome into 50 bp bins; "total reads" here means reads remaining after Bowtie alignment and removal of redundant (duplicate) reads.
When I visualize the tracks in IGV, one sample is consistently lower than the other, both genome-wide and at housekeeping genes such as ACTB and GAPDH.
My main concern is that the total read counts after duplicate removal differ considerably: 9.5M in one sample versus 14M in the other.
My question is: can normalization by total reads fail when two samples differ this much in sequencing depth, and if so, how?
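For reference, the kind of total-read scaling I describe above can be sketched as follows. This is a minimal toy sketch with made-up bin counts, not my actual pipeline or data; the function name and numbers are invented for illustration.

```python
# Sketch of total-read (reads-per-million style) normalization of binned
# ChIP-seq coverage. Bin counts below are made-up toy numbers, not real data.

def normalize_by_total(bin_counts, scale=1e6):
    """Scale each bin's read count by the library's total read count."""
    total = sum(bin_counts)
    return [c * scale / total for c in bin_counts]

# Two hypothetical libraries of different depth (standing in for the
# 9.5M vs 14M situation, using tiny totals for readability).
sample_a = [10, 40, 50]   # total = 100 reads
sample_b = [30, 60, 110]  # total = 200 reads

print(normalize_by_total(sample_a))  # [100000.0, 400000.0, 500000.0]
print(normalize_by_total(sample_b))  # [150000.0, 300000.0, 550000.0]
```

Note that this scaling only corrects for depth, not for differences in ChIP efficiency or signal-to-noise between libraries, which is where my concern comes from.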