SEQanswers > Bioinformatics > Normalizing to input
Old 01-10-2011, 09:39 AM   #1
biznatch
Senior Member
 
Location: Canada

Join Date: Nov 2010
Posts: 126
Normalizing to input

I want to make some .wig and/or .bed files for visualising in the UCSC Genome Browser, but first I want to normalise the samples to input. I'm using Perl scripts to do this (I don't need help writing the scripts, just with the methodology; this is my first set of ChIP-seq data, although maybe there are programs out there that can already do this for me?):

1. I have about three times as many reads for input (60 million) as for the experimental sample. Before subtracting input from experimental, should I divide the input coverage at each bp by 3 (or whatever the exact ratio is)? Is there another way to normalise for the difference in read numbers between input and experimental?

2. Once that's done, should I just subtract input from experimental at each bp? (Rough sketch of what I mean below.)
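Something like this Python sketch is what I have in mind (my real scripts are Perl; the coverage arrays and read totals here are just made-up examples):

Code:
# Toy per-bp coverage for one region (placeholder numbers, not real data).
chip_cov  = [12, 30, 55, 8]   # ChIP coverage at each bp
input_cov = [30, 33, 36, 27]  # input coverage at each bp

chip_total  = 20_000_000  # total mapped ChIP reads (example)
input_total = 60_000_000  # total mapped input reads (example)
scale = chip_total / input_total  # ~1/3: scale input down to ChIP depth

# Questions 1+2: scale the input, then subtract it at each bp,
# clamping negatives to zero so the .wig track stays non-negative.
normalised = [max(c - i * scale, 0) for c, i in zip(chip_cov, input_cov)]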
Old 01-10-2011, 11:32 PM   #2
simonandrews
Simon Andrews
 
Location: Babraham Inst, Cambridge, UK

Join Date: May 2009
Posts: 871

Not wishing to evade your question - but are you sure you want to do that?

When we started out doing ChIP-Seq we used to normalise against input, but after looking at the results we found that, in general, we were causing more problems than we were fixing. The reason was that over any given peak in our ChIP the coverage in the input was much poorer than that in the ChIP, so we were effectively reducing our accuracy of measurement to that of the poorly covered input. In many cases we had only a very small number of reads in the input, and the addition or loss of just a few reads would have a huge effect on the corrected value we would get.

What we did instead was to use the input as a filter to mask out regions with far more reads than we would expect. These regions normally contained mismapped reads, and it was better to discard them than to try to correct the ChIP sample against mismapped reads.
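As a rough sketch of that masking idea (toy bin counts; the 5x threshold is an assumption you would tune to your own input):

Code:
# Mask bins where the input has far more reads than expected (likely mismapping).
input_bins = [40, 38, 500, 41, 36]  # toy read counts per bin in the input
chip_bins  = [90, 85, 600, 88, 80]  # matching bins in the ChIP

expected  = sum(input_bins) / len(input_bins)  # mean input count per bin
threshold = 5 * expected                       # 5x the mean; tune to taste

# Keep only bins whose input count looks sane; drop the rest outright.
kept = [(c, i) for c, i in zip(chip_bins, input_bins) if i <= threshold]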

In your case you say you have 3x the coverage in the input, so maybe you have enough data to do this correction reliably. Even so, it might be worth looking at the general level of variability in your input sample and, excluding extreme outliers, comparing this to the levels of enrichment you see in your ChIP. You can then get a good impression of whether the variability in the input is going to have a considerable impact on how you judge the strength of the enriched peaks.
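One quick way to eyeball that (a sketch; the outlier trim and the bin counts are arbitrary):

Code:
import statistics

input_bins = [40, 38, 44, 41, 36, 39, 500, 42]  # toy per-bin input counts

# Trim the extreme upper tail before assessing variability.
trimmed = sorted(input_bins)[: int(len(input_bins) * 0.95)]
cv = statistics.stdev(trimmed) / statistics.mean(trimmed)

# Compare cv to your typical ChIP enrichment (fold change over input):
# if the input's relative variability is of the same order as the
# enrichment you care about, per-bp correction will mostly add noise.
print(f"input coefficient of variation: {cv:.2f}")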

The simplest correction is to work out the log-transformed ratio of ChIP to input. You can get the same effect by taking the log of the read count in each sample and then subtracting the input from the ChIP.
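In code that's just (the pseudocount of 1 is an arbitrary choice to avoid taking the log of zero):

Code:
import math

chip, inp = 120, 35  # toy read counts over one region

# log2(ChIP/input), with a pseudocount so empty input bins don't blow up.
ratio = math.log2((chip + 1) / (inp + 1))

# Identical by log rules: log2(a/b) == log2(a) - log2(b).
same = math.log2(chip + 1) - math.log2(inp + 1)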

In terms of corrections, if you're using multiple ChIP samples then you want to correct their counts to account for the differing total numbers of reads in each sample (say by expressing each count as counts per million mapped reads). You can correct the input as well if you like, but given that you will use the same input for each ChIP it doesn't really matter whether you do: it will just shift all of your results by a constant factor.
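For instance (per-million scaling only; the totals here are invented):

Code:
# Counts per million: put ChIP samples on a common scale of sequencing depth.
def cpm(count, total_reads):
    return count * 1_000_000 / total_reads

# Same raw count over a peak, but the libraries differ 2x in depth.
print(cpm(150, 18_000_000))  # ~8.33
print(cpm(150, 36_000_000))  # ~4.17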
Old 01-11-2011, 11:15 AM   #3
biznatch
Senior Member
 
Location: Canada

Join Date: Nov 2010
Posts: 126

No, I'm not sure, haha. Just figuring things out here. Coverage in this input data looks pretty good and consistent, except for some "peaks" where there's a peak in both the input and the ChIP, and it's basically these that I want removed from the ChIP data, as I suppose they're artefacts of mismapping or bias. I have other data with far fewer input reads, so maybe a filter like you suggested would work better there. Thanks for the reply, it's given me some ideas to try out.
Old 12-04-2011, 02:10 AM   #4
yaten2020
Junior Member
 
Location: INDIA

Join Date: Aug 2011
Posts: 7

Hi,
I think something like this has been done by Li Chen here, though I couldn't understand it inside and out. Any comments?

YK
Old 05-06-2013, 08:47 AM   #5
rebrendi
ng
 
Location: LA

Join Date: May 2008
Posts: 78

Quote:
Originally Posted by simonandrews (post #2, quoted in full above)
Simon, I completely agree with the arguments; I just want to make sure things haven't changed during these two years: is it still common NOT to normalize by input?
Old 05-06-2013, 11:38 PM   #6
simonandrews
Simon Andrews
 
Location: Babraham Inst, Cambridge, UK

Join Date: May 2009
Posts: 871

I don't pretend to speak for the whole of the ChIP-Seq analysis field, but for our analyses we don't directly normalise to input. We use input samples when we do peak calling, to get a local read density estimate for defining enrichment, but this doesn't normally carry through into our quantitation. We will often use other normalisation techniques to normalise the global distribution of counts, to remove effects introduced by differential ChIP efficiency, but these are not position specific. We would still use the input as a filter to remove places showing large levels of enrichment if we were analysing data without peaks called against an input.
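For the global, non position-specific part, a simple sketch (median-ratio scaling; not our exact pipeline, and the counts are invented):

Code:
import statistics

sample_a = [10, 22, 35, 48, 90]  # toy per-bin counts, ChIP sample A
sample_b = [5, 11, 18, 25, 44]   # sample B, lower ChIP efficiency

# One global factor matches B's median bin count to A's; relative
# peak strengths within each sample are left untouched.
factor = statistics.median(sample_a) / statistics.median(sample_b)
sample_b_scaled = [x * factor for x in sample_b]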

This all assumes that we're using samples sequenced on the same platform with the same type of run, mapped with the same mapper and the same options. Under those conditions most of the artefacts you're looking at will be constant between samples, so you're fine if you're comparing different sample groups. If you really want to compare peak strengths within a sample then you might want to look at input normalisation or filtering more carefully, but that is always going to be tricky.