SEQanswers
Old 09-17-2010, 12:00 PM   #1
captainentropy
Member
 
Location: San Francisco Bay Area

Join Date: Mar 2009
Posts: 89
Negative control sequencing

What are people in the field doing with respect to sequencing DNA captured in negative controls (IgG, preimmune serum, beads only, etc.)?

I hear very little about doing this. A labmate says she has talked with some labs doing major ChIP-seq projects and they always sequence the negative control.

Honestly, I don't always get measurable DNA from the negative control. In one recent ChIP I recovered 480 ng with the antibody and 7 ng with the IgG+beads control; in another it was 580 ng and 0 ng. So in these two cases it wouldn't even be possible. But many times I do pull down enough to make a library, even when the enrichment of IP over the negative control is quite good.

Any thoughts/comments?
Old 11-01-2010, 07:09 AM   #2
mudshark
Senior Member
 
Location: Munich

Join Date: Jan 2009
Posts: 138
Default

Given that I once calculated that only about 2% of the reads in my IP library were actually derived from regions bound by my target, I am really surprised that you end up with so much less in an IgG or bead-only control.

Anyway, 480 ng with the antibody is a hell of a lot. What is your target?
Old 11-01-2010, 01:57 PM   #3
captainentropy
Member
 
Location: San Francisco Bay Area

Join Date: Mar 2009
Posts: 89
Default

mudshark, could you tell me how you determined that only 2% of your reads came from regions bound by your target? Assuming the amount of nonspecifically bound DNA was negligible, could it just be a really bad antibody then?

As for my results, keep in mind that the amount of DNA pulled down is a function of antigen abundance, the amount of chromatin in the IP, and antibody quality. In my case I wasn't using much chromatin, 10 ug I believe, but the protein is very abundant in the cells we use. Based on our calculations from past (non-ChIP-seq) experiments it occupies about 2% of the genome. My results were much higher, true, but perhaps the previous estimates, which were made with BACs, were too low.

I also use an uncommon method of chromatin preparation. After lysing my crosslinked cells I spin the lysate through a cushion of 8M urea at 45-55K RPM for ~7-12 hours. The advantage of this is that the pellet that emerges at the bottom of the tube is 100% crosslinked material: there is no RNA, no free DNA, and no unbound protein. No unbound DNA means less background, and no unbound protein means higher true signal. The antibody is pretty fabulous too, very clean; we have a couple of custom antibodies that aren't as strong as this commercial one. So all of that results in a lot of ChIPed DNA.

A nearby fly lab routinely recovers microgram quantities of ChIPed DNA. But they start with milligrams of chromatin and are ChIPing histones and histone PTMs.
Old 11-02-2010, 05:06 AM   #4
mudshark
Senior Member
 
Location: Munich

Join Date: Jan 2009
Posts: 138
Default

hi captainentropy

We usually ChIP from 50-60 ug of chromatin and get around 5-10 ng of DNA in the IP. I would not say that the antibody is particularly inefficient; it is comparable to many others. In comparison to anti-histone antibodies the yield is of course very low.

About the 2% I mentioned: I basically know all my binding sites, because I have good prior knowledge of the system and had in any case mapped them previously by ChIP-chip.
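
For anyone who wants to put a comparable number on their own library, a minimal sketch of one way to estimate it is shown below. This is only an illustration, not the pipeline actually used here; the pysam-based approach and the file names (ip.bam, known_sites.bed) are assumptions.

Code:
# Rough estimate of the fraction of mapped IP reads that fall in known binding sites
# (e.g. sites previously mapped by ChIP-chip). Reads spanning two sites may be counted
# twice, so treat the result as approximate.
import pysam

def fraction_reads_in_sites(bam_path, bed_path):
    bam = pysam.AlignmentFile(bam_path, "rb")   # coordinate-sorted, indexed BAM
    in_sites = 0
    with open(bed_path) as bed:                 # BED file of known binding sites
        for line in bed:
            if not line.strip() or line.startswith(("#", "track")):
                continue
            chrom, start, end = line.split()[:3]
            in_sites += bam.count(chrom, int(start), int(end))
    total = bam.mapped                          # total mapped reads in the library
    bam.close()
    return in_sites / total if total else 0.0

print("%.1f%% of mapped IP reads overlap known sites"
      % (100 * fraction_reads_in_sites("ip.bam", "known_sites.bed")))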

Quite interesting, your chromatin prep protocol!
Old 11-02-2010, 06:07 AM   #5
supertjejen
Junior Member
 
Location: Denmark

Join Date: Oct 2010
Posts: 1
Default

hi captainentropy!

I'm having the same problem. I'm about to sequence, and I have 50 ng in my IP and 4 ng in my negative control; for another antibody I have 230 ng in the IP and 1 ng in the negative control. I'm thinking about using the input as the negative control instead, as apparently some do that; see "ChIP-seq: advantages and challenges of a maturing technology" by Peter J. Park. But I'm new to sequencing and would appreciate feedback. Does anyone have any ideas?
Old 11-02-2010, 06:17 AM   #6
mudshark
Senior Member
 
Location: Munich

Join Date: Jan 2009
Posts: 138
Default

Very good point. We only sequence the input as a reference. Of course that does not control for bias from unspecific bead binding etc., but an IgG control has its own biases anyway.

In ChIP-chip the standard is to use the input as a reference; why should this be a bad choice for ChIP-seq?
Old 11-02-2010, 03:41 PM   #7
captainentropy
Member
 
Location: San Francisco Bay Area

Join Date: Mar 2009
Posts: 89
Default

supertjejen, since most people use ChIP conditions that result in essentially no background (chromatin/DNA sticking to the beads or to the Fc region of the antibody), one can't very well sequence that background, right? Presumably, then, all of the ChIP-seq signal is due to DNA recovered from the immunoprecipitation. However, it's now clear that a lot of those peaks are the same as what is seen from sequencing a control sample that was not immunoprecipitated (input). So be careful not to use the terms "negative control" and "input" interchangeably; they are two different things.

Sequencing the input is a good idea in order to control for the patterns that arise from the uneven distribution of crosslinks in the nucleus: regions of heterochromatin are more proteinaceous and thus more heavily crosslinked, while euchromatin is less proteinaceous and carries fewer crosslinks. The regions with fewer crosslinks will sonicate/digest and decrosslink more easily. The material that is harder to sonicate remains behind in the insoluble pellet (after spinning the sonicated samples at high speed), reducing any potential signal from those regions and enriching signal in the regions that had less crosslinking.

Of course there is additional bias in the size-selection step. When you select a narrow range of shorter fragments (~200-250 bp), those are the fragments that sonicated very well, which probably came from chromatin with fewer proteins and crosslinks. The larger fragments likely come from chromatin that was more resistant to sonication (probably heterochromatin). A little bias here, a little bias there, and the true signal might be somewhat skewed depending on where your protein of interest resides. Maybe. That's my thinking on it.
Old 02-11-2011, 09:21 AM   #8
NGene
Junior Member
 
Location: Germany

Join Date: Feb 2011
Posts: 8
Default

Quote:
Originally Posted by captainentropy View Post
Sequencing the input is a good idea in order to control for the patterns that arise from the uneven distribution of crosslinks in the nucleus: regions of heterochromatin are more proteinaceous and thus more heavily crosslinked, while euchromatin is less proteinaceous and carries fewer crosslinks. The regions with fewer crosslinks will sonicate/digest and decrosslink more easily. The material that is harder to sonicate remains behind in the insoluble pellet (after spinning the sonicated samples at high speed), reducing any potential signal from those regions and enriching signal in the regions that had less crosslinking. Of course there is additional bias in the size-selection step. When you select a narrow range of shorter fragments (~200-250 bp), those are the fragments that sonicated very well, which probably came from chromatin with fewer proteins and crosslinks. The larger fragments likely come from chromatin that was more resistant to sonication (probably heterochromatin). A little bias here, a little bias there, and the true signal might be somewhat skewed depending on where your protein of interest resides. Maybe. That's my thinking on it.
I am facing this issue too. I suspect I am enriching the parts of the genome that are less crosslinked and likely losing the heterochromatin (which is what I am interested in...).
I believe this is the case because I clearly see differences in read abundance even for the H3 and H2A histone IPs (no modifications, just the bare histones) that I used for normalization.

Do you have any suggestions for overcoming this issue?

Or, even better, could one turn this disadvantage into an advantage? In other words, could I assume that wherever I see a clear "valley" in the H3/H2A-IP reads I am "seeing heterochromatin", and thus analyze those areas differently (e.g. with a different threshold for my sample data sets) from the regions that are more enriched?
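
As a toy illustration of that idea (arbitrary thresholds, not a validated method), one could scan binned H3/H2A-IP coverage for long runs of bins that sit well below the chromosome-wide mean and treat those as candidate poorly released, possibly heterochromatic, domains:

Code:
# Flag long runs of low-coverage bins as candidate "valleys". bin_counts is a 1D
# array of read counts per fixed-size bin along one chromosome (e.g. from the H3 IP).
import numpy as np

def low_coverage_domains(bin_counts, bin_size=10_000, frac_of_mean=0.33, min_len=1_000_000):
    counts = np.asarray(bin_counts, dtype=float)
    low = counts < frac_of_mean * counts.mean()      # bins below, e.g., 1/3 of the mean
    domains, start = [], None
    for i, flag in enumerate(low):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * bin_size >= min_len:    # keep only long runs
                domains.append((start * bin_size, i * bin_size))
            start = None
    if start is not None and (len(low) - start) * bin_size >= min_len:
        domains.append((start * bin_size, len(low) * bin_size))
    return domains                                   # list of (start, end) coordinates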
Old 02-13-2011, 02:50 AM   #9
mudshark
Senior Member
 
Location: Munich

Join Date: Jan 2009
Posts: 138
Default

Quote:
Originally Posted by NGene View Post
Do you have any suggestions for overcoming this issue?

Or, even better, could one turn this disadvantage into an advantage? In other words, could I assume that wherever I see a clear "valley" in the H3/H2A-IP reads I am "seeing heterochromatin", and thus analyze those areas differently (e.g. with a different threshold for my sample data sets) from the regions that are more enriched?
You can overcome this problem by normalizing to the released chromatin (input or mock IP).

As regards your valleys in H3/H2A: watch out, because there are regions of chromatin that are rather depleted of histones (nucleosome-free/depleted regions), often found in promoters, preferentially active ones.

The differential release of chromatin has actually been turned into an assay:
http://www.ncbi.nlm.nih.gov/pubmed/19303047
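
As a concrete illustration of that normalization, a per-bin log2(IP/input) track could be computed along the lines sketched below. The bin size, pseudocount, and pysam-based counting are assumptions, not the method from the linked paper; the BAM file names are placeholders.

Code:
# log2(IP/input) per fixed-size bin, with each library scaled to reads per million
# mapped reads. Assumes indexed BAMs "ip.bam" and "input.bam" (placeholder names).
import numpy as np
import pysam

def binned_log2_ratio(ip_path, input_path, chrom, chrom_len, bin_size=10_000, pseudo=1.0):
    ip = pysam.AlignmentFile(ip_path, "rb")
    inp = pysam.AlignmentFile(input_path, "rb")
    starts = np.arange(0, chrom_len - bin_size, bin_size)
    ratios = []
    for s in starts:
        ip_rpm = ip.count(chrom, int(s), int(s + bin_size)) * 1e6 / ip.mapped
        inp_rpm = inp.count(chrom, int(s), int(s + bin_size)) * 1e6 / inp.mapped
        # the pseudocount keeps empty bins from producing infinities
        ratios.append(np.log2((ip_rpm + pseudo) / (inp_rpm + pseudo)))
    ip.close(); inp.close()
    return starts, np.array(ratios)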
Old 02-13-2011, 03:00 AM   #10
Chipper
Senior Member
 
Location: Sweden

Join Date: Mar 2008
Posts: 324
Default

Here is another creative way of publishing non-random signals in the negative control...

http://www.ncbi.nlm.nih.gov/pubmed?term=sono-seq
Old 02-14-2011, 01:56 AM   #11
NGene
Junior Member
 
Location: Germany

Join Date: Feb 2011
Posts: 8
Default

Quote:
Originally Posted by mudshark View Post
You can overcome this problem by normalizing to the released chromatin (input or mock IP).

As regards your valleys in H3/H2A: watch out, because there are regions of chromatin that are rather depleted of histones (nucleosome-free/depleted regions), often found in promoters, preferentially active ones.

The differential release of chromatin has actually been turned into an assay:
http://www.ncbi.nlm.nih.gov/pubmed/19303047
I am aware of regions that are depleted of histones/nucleosomes (especially active promoters, as you say). The point is that I am talking about very big domains (more than 5 Mbp) where the abundance is often 3- to 4-fold lower than the mean value I calculated.

Although I expected some "valleys", I did not expect them to be so large.

Thanks a lot for the reply, by the way.

PS: the two publications are brilliant!
Old 02-14-2011, 02:09 PM   #12
frozenlyse
Senior Member
 
Location: Australia

Join Date: Sep 2008
Posts: 136
Default

Quote:
Originally Posted by NGene View Post
I am aware of regions that are depleted of histones/nucleosomes (especially active promoters, as you say). The point is that I am talking about very big domains (more than 5 Mbp) where the abundance is often 3- to 4-fold lower than the mean value I calculated.

Although I expected some "valleys", I did not expect them to be so large.

Thanks a lot for the reply, by the way.

PS: the two publications are brilliant!
I'm not sure what system you're using, but could you be seeing large genomic deletions? We can see them contribute to both IP and input signals in enrichment experiments, e.g. http://www.ncbi.nlm.nih.gov/pubmed/21045081
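
A rough way to distinguish the two explanations for a candidate low-signal domain is to ask whether the input loses coverage as well: a genuine deletion should depress both libraries, whereas a crosslinking/release bias leaves the input closer to its genome-wide average. The sketch below is illustrative only, with arbitrary thresholds.

Code:
# Classify a candidate low-signal domain using normalized coverage (e.g. RPKM)
# from the IP and the input, plus the input's genome-wide mean. Thresholds are arbitrary.
def classify_domain(ip_cov, input_cov, input_genome_mean):
    if ip_cov < 0.05 * input_genome_mean and input_cov < 0.05 * input_genome_mean:
        return "possible deletion (signal lost in both IP and input)"
    if input_cov > 0.5 * input_genome_mean:
        return "likely IP/release bias (input roughly normal)"
    return "ambiguous - check copy number independently"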

Last edited by frozenlyse; 02-14-2011 at 02:10 PM. Reason: added pubmed link
Old 02-15-2011, 01:22 AM   #13
NGene
Junior Member
 
Location: Germany

Join Date: Feb 2011
Posts: 8
Default

Quote:
Originally Posted by frozenlyse View Post
I'm not sure what system you're using, but could you be seeing large genomic deletions? We can see them contribute to both IP and input signals in enrichment experiments, e.g. http://www.ncbi.nlm.nih.gov/pubmed/21045081
I would exclude large deletions, since the genome of the cell line I am using has been sequenced and annotated. Besides, I assume I would get no reads at all if a deletion had occurred (in a homogeneous population); instead, I get a lower number of reads.

Anyway, I think captainentropy pointed it out: I must be selecting for (or at least giving preference to) certain regions during the library prep.

I will keep investigating this. Thanks for the replies!
Old 02-26-2011, 01:08 AM   #14
ETHANol
Senior Member
 
Location: Western Australia

Join Date: Feb 2010
Posts: 308
Default

Quote:
Originally Posted by captainentropy View Post
I also use an uncommon method of chromatin preparation. After lysing my crosslinked cells I spin the lysate through a cushion of 8M urea at 45-55K RPM for ~7-12 hours. The advantage of this is that the pellet that emerges at the bottom of the tube is 100% crosslinked material.
I'd be really interested in knowing the details of your chromatin prep protocol.

As for the issue of uneven chromatin fragmentation, more of my chromatin is released when I fragment with micrococcal nuclease, and the fragmentation seems to be more efficient, i.e. there is essentially no chromatin above 500 bp and it's simple to digest it all down to mononucleosome size. There is evidence that it is easier on some epitopes as well. I can share what I do, but I think it's too long to post, so send me a PM with your e-mail address if you like.
Old 02-28-2011, 01:11 AM   #15
NGene
Junior Member
 
Location: Germany

Join Date: Feb 2011
Posts: 8
Default

I would be interested in knowing captainentropy's prep protocol too.

As for your protocol, ETHANol, I would be glad to read it as well. I am sending you a PM with my e-mail address, if you do not mind.
Old 02-28-2011, 11:38 AM   #16
ETHANol
Senior Member
 
Location: Western Australia

Join Date: Feb 2010
Posts: 308
Default

I'll e-mail it now. If you don't see it, check your spam folder; e-mails from Greece tend to go there. Essentially I follow this protocol with some modifications:
http://www.epigenome-noe.net/WWW/res....php?protid=10

Another good thing about micrococcal nuclease fragmentation is that you don't end up with a small but significant amount of higher-molecular-weight fragments. It's also easier on epitopes that may be subject to stripping from the chromatin.
Old 03-10-2011, 09:13 AM   #17
Alex Clop
Member
 
Location: London

Join Date: Sep 2008
Posts: 15
Default

Hi all,

ETHANol,

I am also using the Epigenome-NOE protocol for native chromatin with micrococcal digestion, but I don't know how to get the highest amount of chromatin while still obtaining only mononucleosomal bands.

By any chance, could you send me your protocol as well, please?

(I can pass you my email address)

Thanks in advance

Alex
Old 03-10-2011, 10:23 AM   #18
ETHANol
Senior Member
 
Location: Western Australia

Join Date: Feb 2010
Posts: 308
Default

Hi Alex,
First, I'm not doing native ChIP, although I have used the Epigenome-NOE native ChIP protocol and it works. I'm doing crosslinked ChIP (X-ChIP) and fragmenting with micrococcal nuclease.

I'm posting two protocols as .pdf files. The first protocol, "ChIP_Protocol.pdf", I've used for ChIP-seq and it works great. The inputs are very even and don't show any bias towards open chromatin, which was a problem I had with sonication. There is a clear bias toward GC-rich DNA, but I think that comes from the library amplification. I get between 1 and 40 ng of DNA depending on the antibody/antigen. Another nice thing about enzymatic fragmentation is that you do not have to dilute your samples 10-fold after sonication to get the SDS level down to 0.1%.
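
A quick diagnostic for a suspected GC bias is to correlate per-bin GC content of the reference with input coverage. The sketch below is only an illustration, not part of the attached protocols, and the file names are placeholders.

Code:
# Pearson correlation between per-bin GC fraction and input read counts for one
# chromosome. A strongly positive value is consistent with GC-biased amplification.
import numpy as np
import pysam

def gc_vs_coverage(fasta_path, bam_path, chrom, chrom_len, bin_size=10_000):
    fa = pysam.FastaFile(fasta_path)            # indexed reference, e.g. "genome.fa"
    bam = pysam.AlignmentFile(bam_path, "rb")   # indexed input library, e.g. "input.bam"
    gc, cov = [], []
    for start in range(0, chrom_len - bin_size, bin_size):
        seq = fa.fetch(chrom, start, start + bin_size).upper()
        gc.append((seq.count("G") + seq.count("C")) / len(seq))
        cov.append(bam.count(chrom, start, start + bin_size))
    fa.close(); bam.close()
    return np.corrcoef(gc, cov)[0, 1]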

The second protocol, "Quick_ChIP.pdf", I've only used for ChIP-qPCR. It works as well as or better than the longer protocol. I haven't checked what the inputs look like when sequenced with this protocol, although I am curious.

The protocols are more like notes I wrote for the student who works at the next bench, not something ready to distribute more widely, so be on the lookout for obvious mistakes; I have been known to be a little dyslexic.

If anyone has any opinions on these protocols I'd like to know.
Attached Files
File Type: pdf ChIP_Protocol.pdf (108.2 KB, 73 views)
File Type: pdf Quick_ChIP.pdf (106.3 KB, 30 views)
Old 03-10-2011, 11:38 AM   #19
Alex Clop
Member
 
Location: London

Join Date: Sep 2008
Posts: 15
Default

Thanks Ethan for sharing.

I will keep you posted with my own optimizations.

Cheers
Old 05-09-2011, 02:21 PM   #20
captainentropy
Member
 
Location: San Francisco Bay Area

Join Date: Mar 2009
Posts: 89
Default

Quote:
Originally Posted by ETHANol View Post
I'd be really interested in knowing the details of your chromatin prep protocol.
OK, the protocol is actually rather simple. Starting with ~10^8 cells, I gently lyse them in 4% lysis buffer (4% SDS, 50 mM Tris pH 8, 0.1 M NaCl, 1 mM EDTA, 0.5 mM EGTA), then carefully layer the lysate onto a cushion of 8 M urea and spin the sample at 55,000 RPM for 7 hours, or at 45,000 RPM overnight, in a Beckman ultracentrifuge. As I mentioned before, all the uncrosslinked DNA, proteins, and RNA remain behind in the urea. At the bottom of the tube is a pellet, with the look and consistency of a soft contact lens, that is pure crosslinked material.

The next step is to dialyze out any remaining urea. Usually I'll just wash the pellet several times in 2 mL (the volume of the tubes I use) of PBS. You can also leave it stored in PBS until needed; the pellet will not dissolve or disaggregate. Once the pellet is cleaned, we've found it works best to freeze it in liquid nitrogen and then either grind it up with a pellet pestle or pass it through an insulin syringe (this fragments the pellet into really tiny chunks that are more easily sonicated in the next step). Then I sonicate the sample in a Bioruptor or Covaris using conditions optimized for the type of chromatin being sonicated. From this point on it's really no different from any other type of ChIP.

N.B. if you were to use this method you would need to optimize the crosslinking conditions first, which you should have done anyway: if you overcrosslink the cells you won't lyse anything and you'll just have a loose clump of cells after the spin. Also, these centrifuge tubes hold ~2.2 mL of liquid. In our protocol the 10^8 cells are lysed in 800 uL of lysis buffer, of which 200 uL is layered onto each of four tubes containing 1.8 mL of 8 M urea.