  • Why does quartile normalization inflate my FPKM values by ~4 orders of magnitude?

    Hello,

    When I run cufflinks with quartile normalization, the FPKM values it gives me are about 4 orders of magnitude higher than without quartile normalization.
    This makes absolutely no sense to me. Is anyone else having this problem?

    Also, there is a strange comment in the cufflinks manual:
    "If requested, Cufflinks/Cuffdiff will use the number of reads mapping to the upper-quartile locus in place of the "map mass" (M) when calculating FPKM."

    Shouldn't this be the number of reads NOT mapping to the upper quartile? My understanding is that bad behavior -- titrating out the bulk of the reads because of a few highly overrepresented sequences in one sample -- can be corrected for by IGNORING the upper quartile.

    I'd love some answers.

    ~Rachel

  • #2
    Originally posted by Rachel Hillmer
    Hello,

    Shouldn't this be the number of reads NOT mapping to the upper quartile? My understanding is that bad behavior -- titrating out the bulk of the reads because of a few highly overrepresented sequences in one sample -- can be corrected for by IGNORING the upper quartile.

    ~Rachel
    Glad I'm not the only one who thinks this. I'm sure there is an explanation, but at the moment it does not seem intuitive to me.

    • #3
      Wondering this as well.

      • #4
        I am also confused by the explanation for upper quartile normalization provided by the Cufflinks page (i.e. adjusting for highly overexpressed genes), and would appreciate any insight on that, but the paper the authors reference (Bullard 2010 BMC Bioinformatics) makes more sense, I think.

        Basically, the upper quartile normalization gets rid of the long tail on the distribution of read counts that occurs due to the "preponderance of zero and low-count genes." So it seems that using this kind of normalization reduces the impact of sequencing noise.

        It makes sense that an FPKM would be inflated with upper quartile normalization then, because you are basically dividing by a smaller denominator (upper quartile < total reads).

        Please let me know if this is a plausible reasoning, since I am new to this.
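
        To make that concrete, here is a rough sketch of the arithmetic, assuming the textbook FPKM definition (fragments per kilobase of transcript per million fragments in the denominator). The numbers and names below are made up for illustration and are not Cufflinks' actual code:

        def fpkm(fragments, transcript_len_bp, map_mass):
            # FPKM = fragments / (kilobases of transcript * millions of fragments in the denominator)
            return fragments / ((transcript_len_bp / 1e3) * (map_mass / 1e6))

        fragments = 500           # fragments mapped to one transcript
        length_bp = 2000          # transcript length in bp
        total_mass = 20_000_000   # total mapped fragments (the usual map mass M)
        uq_mass = 5_000           # hypothetical upper-quartile count used in place of M

        print(fpkm(fragments, length_bp, total_mass))  # 12.5
        print(fpkm(fragments, length_bp, uq_mass))     # ~50000 -- inflated because the denominator shrank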

        • #5
          jk1124,

          Your reasoning is not flawed, but (unless I'm missing something) the only way to increase FPKM by four orders of magnitude would be if the upper quartile read count constitutes only 1/10000 of the total read count. That seems unlikely.
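
          Since FPKM scales as 1/M, the inflation factor is just the ratio of the two denominators -- toy numbers for illustration, not from any real run:

          m_total = 20_000_000   # total mapped fragments (hypothetical)
          m_uq = 2_000           # upper-quartile count (hypothetical)
          print(m_total / m_uq)  # 10000.0 -- a 10^4 inflation would need a 10^4 ratio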

          Also, the distribution tail of the data would not include zero-count genes.

          • #6
            So would you recommend normalizing the data by the upper quartile of the number of fragments mapping to individual loci when running cufflinks, or should one just omit this option?

            • #7
              The Upper Quartile normalisation method essentially just uses the count value at the 75th percentile as the denominator.
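
              For anyone who wants to see that spelled out, here is a minimal numpy sketch in the spirit of Bullard et al. 2010 -- a toy example with made-up counts, not Cufflinks' actual implementation:

              import numpy as np

              counts = np.array([0, 0, 1, 3, 5, 8, 12, 40, 100, 2500])  # per-gene read counts for one sample

              # Upper-quartile scaling: use the count at the 75th percentile
              # (computed here over genes with at least one read) as the denominator.
              nonzero = counts[counts > 0]
              upper_quartile = np.percentile(nonzero, 75)  # 55.0 for these made-up counts

              normalized = counts / upper_quartile
              print(upper_quartile)
              print(normalized)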

              Also, for people thinking about normalization methods I would recommend this article:

              A comprehensive evaluation of normalization methods for Illumina high-throughput RNA sequencing data analysis. (2012) Brief Bioinform
