  • Normalizing to input

    I want to make some .wig and/or .bed files for visualising in the UCSC Genome Browser, but first I want to normalise the samples to input. I'm using Perl scripts to do this (I don't need help writing the scripts, just with the methodology; this is my first set of ChIP-seq data... although maybe there are programs out there that can already do this for me?):

    1. I have about 3 times as many reads for the input (60 million) as for the experimental sample. Before subtracting input from experimental, should I divide the input coverage at each bp by 3 (or whatever the exact ratio is)? Is there another way to normalise for the difference in read numbers between input and experimental?

    2. Once this is done, should I just subtract input from experimental at each bp? (A rough sketch of what I have in mind is below.)
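
    Roughly, this is what I'm picturing (the file names, format, and read totals below are just placeholders; I'd really be reading per-bp coverage for both samples):

    Code:
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch of the scale-and-subtract idea from questions 1 and 2.
    # Assumes two placeholder files with "chrom<TAB>pos<TAB>coverage" per line,
    # covering the same positions in the same order.
    my ($chip_total, $input_total) = (20_000_000, 60_000_000);   # total mapped reads (example numbers)
    my $scale = $chip_total / $input_total;                      # roughly 1/3 in my case

    open my $chip,  '<', 'chip_coverage.txt'  or die $!;
    open my $input, '<', 'input_coverage.txt' or die $!;
    while (defined(my $c = <$chip>) and defined(my $i = <$input>)) {
        chomp($c); chomp($i);
        my ($chrom, $pos, $chip_cov)   = split /\t/, $c;
        my (undef,  undef, $input_cov) = split /\t/, $i;
        my $diff = $chip_cov - $scale * $input_cov;   # question 1: scale, question 2: subtract
        $diff = 0 if $diff < 0;                       # clip negatives for display
        print join("\t", $chrom, $pos, $diff), "\n";
    }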

  • #2
    Not wishing to evade your question - but are you sure you want to do that?

    When we started out doing ChIP-Seq we used to normalise against input, but after looking at the results we found that in general we were causing more problems than we fixed. The reason was that over any given peak in our ChIP the coverage in the input was much poorer than that in the ChIP, so we were effectively reducing our accuracy of measurement to the poor coverage of the input. In many cases we had only a very small number of reads in the input, and the addition or loss of only a few reads would have a huge effect on the corrected value we would get.

    What we did instead was to use the input as a filter to mask out regions where there were way more reads than we would expect. These regions normally contained mismapped reads and it was better to discard them than to try to correct against mismapped reads in the ChIP sample.
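
    As a rough sketch of that kind of filter (the binned-count file format and the 5x cutoff are assumptions for illustration, not a description of our actual pipeline):

    Code:
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Use the input as a mask rather than a correction: flag bins whose input
    # read count is far above the genome-wide mean so they can be excluded
    # from the ChIP data. Assumes "chrom<TAB>start<TAB>end<TAB>count" per line.
    my $cutoff_fold = 5;    # arbitrary threshold: > 5x the mean input count
    my @bins;
    my $sum = 0;

    open my $in, '<', 'input_bins.txt' or die $!;
    while (<$in>) {
        chomp;
        my ($chrom, $start, $end, $count) = split /\t/;
        push @bins, [$chrom, $start, $end, $count];
        $sum += $count;
    }
    close $in;

    my $mean = $sum / @bins;
    for my $bin (@bins) {
        my ($chrom, $start, $end, $count) = @$bin;
        print join("\t", $chrom, $start, $end), "\n" if $count > $cutoff_fold * $mean;
    }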

    In your case you say you have 3x the coverage in the input so maybe you have enough data to do this correction reliably. Even so it might be worth looking at the general level of variability in your input samples and, excluding extreme outliers, compare this to the levels of enrichment you see in your ChIP. You can then get a good impression of whether the variability in the input levels is going to have a considerable impact on how you judge the strength of the enriched peaks.
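
    One crude way to eyeball that (the binning and the 1% outlier trim here are just assumptions for illustration):

    Code:
    #!/usr/bin/perl
    use strict;
    use warnings;

    # How variable is the input once extreme outliers are excluded?
    # Assumes a placeholder file of per-bin input read counts, one count per line.
    open my $in, '<', 'input_bin_counts.txt' or die $!;
    my @counts = sort { $a <=> $b } map { chomp; $_ } <$in>;
    close $in;

    my $trim = int(@counts * 0.01);                  # drop the top and bottom 1%
    @counts  = @counts[$trim .. $#counts - $trim];

    my $mean = 0; $mean += $_ for @counts; $mean /= @counts;
    my $var  = 0; $var  += ($_ - $mean) ** 2 for @counts; $var /= @counts;
    my $cv   = sqrt($var) / $mean;                   # coefficient of variation of the input

    printf "input mean = %.1f, CV = %.2f\n", $mean, $cv;
    # The idea: if this spread is small relative to the enrichment you see over
    # your ChIP peaks, correcting against input won't change much; if it's
    # comparable, it will.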

    The simplest correction is to work out the log transformed ratio of ChIP to input. You can also get the same effect by doing a log count of reads in each sample and then subtracting the input from the ChIP.
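
    In other words (counts here are invented):

    Code:
    # log2(ChIP/input) and log2(ChIP) - log2(input) are the same quantity;
    # a small pseudocount stops positions with zero input reads blowing up.
    my ($chip_count, $input_count, $pseudo) = (120, 35, 1);
    my $log_ratio = log(($chip_count + $pseudo) / ($input_count + $pseudo)) / log(2);
    printf "log2 ratio: %.2f\n", $log_ratio;    # ~1.75 for these numbers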

    In terms of corrections, if you're using multiple ChIP samples then you want to correct the counts in those to account for the differing numbers of total reads in each sample (say by expressing the count as counts per million reads in that sample). You can correct the inputs as well if you like, but given that you will use the same input for each ChIP it doesn't really matter whether you do, since it will just shift all of your results by a constant factor.
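
    For example (library sizes and peak counts below are invented numbers):

    Code:
    # Counts-per-million scaling: express each ChIP sample's count relative to
    # its own total read count so differing sequencing depths are comparable.
    my %totals      = (chip_rep1 => 18_000_000, chip_rep2 => 31_000_000);
    my %peak_counts = (chip_rep1 => 540,        chip_rep2 => 910);

    for my $sample (sort keys %peak_counts) {
        my $cpm = $peak_counts{$sample} / $totals{$sample} * 1_000_000;
        printf "%s: %.1f reads per million\n", $sample, $cpm;
    }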



    • #3
      No I'm not sure, haha. Just figuring things out here. Coverage on this input data looks pretty good and consistent, except for some "peaks" where there's a peak in both the input and ChIP, and it's basically these that I want removed from the ChIP data as I suppose they're artefacts of mismapping or bias. I have other data with far fewer input reads so maybe doing a filter like you suggested would work better for that. Thanks for the reply, it's given me some ideas to try out.



      • #4
        Hi,
        I think something like that has been done by Li Chen here, though I couldn't fully understand it. Any comments?

        YK



        • #5
          Originally posted by simonandrews View Post
          Not wishing to evade your question - but are you sure you want to do that? [...]
          Simon, I completely agree with the arguments; I just want to make sure things haven't changed in the two years since: is it still common NOT to normalize by input?



          • #6
            I don't pretend to speak for the whole of the ChIP-Seq analysis field, but for our analyses we don't directly normalise to input. We use input samples when peak calling, as a local read density estimate to define enrichment, but this doesn't normally carry through into our quantitation. We will often use other normalisation techniques to normalise the global distribution of counts and remove effects introduced by differential ChIP efficiency, but these are not position specific. We would still use the input as a filter to remove places showing large levels of enrichment if we were analysing data without peaks called against an input.
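
            Just to illustrate the global, non-position-specific kind of correction I mean, here is one common flavour of it; the upper-quartile scaling and the counts are only an example, not a description of our actual pipeline:

            Code:
            #!/usr/bin/perl
            use strict;
            use warnings;

            # Global scaling between two ChIP samples: scale the second sample so the
            # upper quartile of its counts over a shared set of regions matches the first.
            # The counts are invented example numbers.
            my @chip1 = (12, 40, 7, 95, 33, 18, 60, 25);
            my @chip2 = (30, 90, 15, 210, 70, 44, 130, 55);

            sub upper_quartile {
                my @sorted = sort { $a <=> $b } @_;
                return $sorted[int(0.75 * $#sorted)];
            }

            my $factor       = upper_quartile(@chip1) / upper_quartile(@chip2);
            my @chip2_scaled = map { $_ * $factor } @chip2;
            printf "scaling factor for the second sample: %.3f\n", $factor;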

            This all assumes that we're using samples sequenced on the same platform with the same type of run, mapped with the same mapper with the same options. Under those conditions most of the artefacts you're looking at would be constant between samples so you're OK if you're comparing different sample groups. If you really want to compare peak strengths within a sample then you might want to look at input normalisation or filtering more carefully, but this is always going to be tricky.

