We are having some trouble with our ChIP-seq experiments: we seem to be completely blind to AT-rich regions. When I check the GC distribution of the ChIP and input reads, the input differs from the theoretical genomic GC distribution by about 2%, while in the IP reads the difference grows to 5-8%.
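For reference, this is roughly the kind of check I mean, as a minimal sketch in plain Python over gzipped FASTQs (the filenames are placeholders, not my actual samples):

```python
# Minimal sketch: per-read GC% histograms for input vs. IP libraries.
# Filenames are placeholders; point them at your own FASTQs.
import gzip
from collections import Counter

BIN = 5  # GC% bin width

def gc_histogram(fastq_gz):
    """Count reads per GC% bin from a gzipped FASTQ."""
    hist = Counter()
    with gzip.open(fastq_gz, "rt") as fh:
        for i, line in enumerate(fh):
            if i % 4 != 1:  # the sequence is line 2 of each 4-line record
                continue
            seq = line.strip().upper()
            if seq:
                gc = 100.0 * (seq.count("G") + seq.count("C")) / len(seq)
                hist[int(gc // BIN) * BIN] += 1
    return hist

for label, path in [("input", "input_R1.fastq.gz"), ("IP", "ip_R1.fastq.gz")]:
    hist = gc_histogram(path)
    total = sum(hist.values())
    print(label)
    for b in sorted(hist):
        print(f"  {b:3d}-{b + BIN:3d}%  {100.0 * hist[b] / total:5.2f}% of reads")
```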
We sonicate with a Bioruptor, so I get that we won't ever come close to Covaris-style temperature control. But what befuddles me is how the IP step could have depleted AT-rich sequences even further.
Library prep was identical for the input and IP samples, starting with cross-link reversal (65 °C overnight), then NEBNext library prep with bead-based size selection only (no gel purifications), and final amplification with 2× Phusion (NEB).
Any advice would be appreciated. As an aside: how large a GC% difference can current GC-normalization algorithms realistically correct, and at what point is a library a complete write-off?
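For concreteness on the normalization question: my understanding is that methods like Benjamini & Speed's (implemented in deepTools as computeGCBias/correctGCBias) model the expected read count per GC fraction and rescale accordingly. Below is a rough sketch of the underlying observed/expected-per-GC-bin calculation, assuming pysam, a coordinate-sorted and indexed BAM, and placeholder file paths and window size:

```python
# Rough sketch of the observed/expected-per-GC-bin idea behind
# Benjamini & Speed-style GC correction. Paths and WINDOW are
# placeholders; the BAM must be coordinate-sorted and indexed.
import pysam
from collections import defaultdict

WINDOW = 1000
bam = pysam.AlignmentFile("ip.sorted.bam", "rb")
fasta = pysam.FastaFile("genome.fa")

reads_by_gc = defaultdict(int)    # total read count per GC bin
windows_by_gc = defaultdict(int)  # number of windows per GC bin

for contig, length in zip(fasta.references, fasta.lengths):
    for start in range(0, length - WINDOW, WINDOW):
        seq = fasta.fetch(contig, start, start + WINDOW).upper()
        if seq.count("N") > WINDOW // 10:  # skip gappy windows
            continue
        gc_bin = round(10 * (seq.count("G") + seq.count("C")) / WINDOW)  # 0..10
        reads_by_gc[gc_bin] += bam.count(contig, start, start + WINDOW)
        windows_by_gc[gc_bin] += 1

mean_per_window = sum(reads_by_gc.values()) / sum(windows_by_gc.values())

print("GC%   obs/exp coverage")
for b in sorted(windows_by_gc):
    obs = reads_by_gc[b] / windows_by_gc[b]
    print(f"{10 * b:3d}%  {obs / mean_per_window:.2f}")
```

If the obs/exp curve dips well below 1 at low GC for the IP but stays near 1 for the input, that would at least localize the extra bias to the IP/library side rather than to mapping.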