  • Normalizing to input

    I want to make some .wig and/or .bed files for visualising in the UCSC Genome Browser, but first I want to normalise the samples to input. I'm using Perl scripts to do this (I don't need help writing the scripts, just wondering about the methodology, as this is my first set of ChIP-seq data... although maybe there are programs out there that can already do this for me?):

    1. I have about 3 times as many reads for the input (60 million) as for the experimental sample. Before subtracting input from experimental, should I divide the input coverage at each bp by 3 (or whatever the exact ratio is)? Is there another way to normalise for differences in the number of reads between input and experimental?

    2. Once this is done, should I just subtract the input from the experimental sample at each bp? (A rough sketch of what I'm picturing is below.)
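
    Something like this, in Python rather than Perl just to keep it compact; the two-column coverage file format and the file names are placeholders for whatever I actually generate:

    Code:
    #!/usr/bin/env python3
    # Sketch: scale the input coverage to the ChIP sequencing depth, then
    # subtract it from the ChIP coverage at each position. Assumes
    # hypothetical two-column files: position <TAB> read_count.

    def read_coverage(path):
        cov = {}
        with open(path) as fh:
            for line in fh:
                pos, count = line.split()
                cov[int(pos)] = float(count)
        return cov

    chip = read_coverage("chip_coverage.txt")   # placeholder file names
    inp = read_coverage("input_coverage.txt")

    # Point 1: scale the input down by the ratio of total reads (~1/3 here)
    # so both tracks are on the same sequencing-depth scale.
    scale = sum(chip.values()) / sum(inp.values())

    # Point 2: subtract the scaled input from the ChIP at each bp, flooring
    # at zero so the subtracted track stays non-negative.
    with open("chip_minus_input.wig", "w") as out:
        for pos in sorted(chip):
            corrected = max(chip[pos] - scale * inp.get(pos, 0.0), 0.0)
            out.write(f"{pos}\t{corrected:.3f}\n")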

  • #2
    Not wishing to evade your question - but are you sure you want to do that?

    When we started out doing ChIP-Seq we used to normalise against input, but after looking at the results we found that in general we were causing more problems than we fixed. The reason was that over any given peak in our ChIP the coverage in the input was much poorer than that in the ChIP, so we were effectively reducing our accuracy of measurement to the poor coverage in the input. In many cases we had only a very small number of reads in the input, and the addition or loss of only a few reads would have a huge effect on the corrected value we would get.

    What we did instead was to use the input as a filter to mask out regions where there were way more reads than we would expect. These regions normally contained mismapped reads and it was better to discard them than to try to correct against mismapped reads in the ChIP sample.
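
    As a rough illustration of that filtering idea (not our actual pipeline; the bin counts, the fold cutoff and the use of the median here are arbitrary choices): bin the input coverage, flag bins with far more reads than the genome-wide expectation, and drop those bins from the ChIP analysis.

    Code:
    import statistics

    def flag_suspect_bins(input_bin_counts, fold_cutoff=5.0):
        # Bins where the input has far more reads than the genome-wide
        # expectation are candidates for masking out of the ChIP data.
        # fold_cutoff is an arbitrary illustrative threshold.
        expected = statistics.median(input_bin_counts)
        return {i for i, c in enumerate(input_bin_counts)
                if c > fold_cutoff * expected}

    # Toy example: bins 2 and 5 pile up in the input and get masked.
    input_bins = [10, 12, 180, 9, 11, 240, 10, 13]
    chip_bins  = [30, 45, 400, 28, 90, 500, 33, 60]
    mask = flag_suspect_bins(input_bins)
    filtered_chip = [c for i, c in enumerate(chip_bins) if i not in mask]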

    In your case you say you have 3x the coverage in the input, so maybe you have enough data to do this correction reliably. Even so it might be worth looking at the general level of variability in your input samples and, excluding extreme outliers, comparing this to the levels of enrichment you see in your ChIP. You can then get a good impression of whether the variability in the input levels is going to have a considerable impact on how you judge the strength of the enriched peaks.

    The simplest correction is to work out the log-transformed ratio of ChIP to input. You can get the same effect by taking the log of the read count in each sample and then subtracting the input from the ChIP.
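
    In code that's just the following; a minimal sketch, where the pseudocount is only there to avoid taking the log of, or dividing by, zero:

    Code:
    import math

    def log2_enrichment(chip_count, input_count, pseudocount=1.0):
        # log2 ratio of ChIP to input; identical to
        # log2(chip + pseudocount) - log2(input + pseudocount).
        return math.log2((chip_count + pseudocount) / (input_count + pseudocount))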

    In terms of corrections, if you're using multiple ChIP samples then you want to correct the counts in those to account for the differing numbers of total reads in each sample (say by expressing the count as counts per million reads in that sample). You can correct the inputs as well if you like, but given that you will use the same input for each ChIP it doesn't really matter whether you do, since it will just move all of your results by a constant factor.
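
    The per-million scaling is equally simple; a sketch:

    Code:
    def counts_per_million(region_counts, total_mapped_reads):
        # Scale raw region counts to counts per million mapped reads so
        # ChIP samples with different sequencing depths are comparable.
        factor = 1_000_000 / total_mapped_reads
        return [c * factor for c in region_counts]

    # e.g. with 20 million mapped reads, a region with 500 reads -> 25 CPM
    print(counts_per_million([500, 120], 20_000_000))   # [25.0, 6.0]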



    • #3
      No, I'm not sure, haha. Just figuring things out here. Coverage in this input data looks pretty good and consistent, except for some "peaks" that appear in both the input and the ChIP, and it's basically these that I want removed from the ChIP data, as I suppose they're artefacts of mismapping or bias. I have other data with far fewer input reads, so maybe a filter like you suggested would work better for that. Thanks for the reply, it's given me some ideas to try out.



      • #4
        Hi,
        I think something like this has been done by Li Chen here, though I couldn't fully understand it. Any comments?

        YK



        • #5
          Originally posted by simonandrews View Post
          Not wishing to evade your question - but are you sure you want to do that? [...]
          Simon, I completely agree with the arguments; I just want to make sure things haven't changed during these two years: is it still common NOT to normalize by input?



          • #6
            I don't pretend to speak for the whole of the ChIP-Seq analysis field, but for our analyses we don't directly normalise to input. We use input samples when we do peak calling, where a local read-density estimate defines enrichment, but this doesn't normally carry through into our quantitation. We will often use other normalisation techniques to normalise the global distribution of counts, to remove effects introduced by differential ChIP efficiency, but these are not position specific. We would still use the input as a filter to remove places showing large levels of enrichment if we were analysing data without using peaks called from an input.

            This all assumes that we're using samples sequenced on the same platform with the same type of run, mapped with the same mapper and the same options. Under those conditions most of the artefacts you're looking at would be constant between samples, so you're OK if you're comparing different sample groups. If you really want to compare peak strengths within a sample then you might want to look at input normalisation or filtering more carefully, but this is always going to be tricky.

