  • Pearson correlation in R

    Guys,
    I tried to compare two vectors, each containing RPKM values I calculated from ChIP-seq reads. I want to use the Pearson correlation as a way to assess the similarity between two ChIP-seq datasets.

    The RPKM values are computed in 500 bp bins genome-wide for mouse, so there are more than 5 million bins in total. I checked the differences between the two RPKM files and they are actually very similar, so presumably the Pearson correlation coefficient should be very close to 1.
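
    Roughly, the two vectors come from files like this (the file names here are placeholders for my actual per-bin RPKM files; each is a 4-column table with the RPKM value in column 4):

    a <- read.table("sample1_rpkm_500bp.bed")   # placeholder name; V4 = RPKM per 500 bp bin
    b <- read.table("sample2_rpkm_500bp.bed")   # placeholder name; same bins, second dataset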

    However, when I used cor.test() in R, the output is like this:
    Pearson's product-moment correlation

    data: a$V4 and b$V4
    t = -0.0037, df = 5309833, p-value = 0.9971
    alternative hypothesis: true correlation is not equal to 0
    95 percent confidence interval:
    -0.0008521632 0.0008489671
    sample estimates:
    cor
    -1.598034e-06
    I can't figure out why I got such a weird correlation. Does anyone have an idea why this is the case? Did I do something completely wrong?

    Thanks a lot!

  • #2
    Can you provide a scatterplot of a$V4 vs. b$V4 (probably best to subsample)?

    Just my two cents: Pearson correlations do not provide much information on the similarity of ChIP-seq samples, given that most of the data points reflect background reads. In other words, you get the correlation coefficient of the background noise.



    • #3
      Simply do cor() instead of cor.test() (which runs a significance test rather than just reporting the coefficient). You can do cor(x, y, method="spearman") for a Spearman rank correlation.

      However, I agree with mudshark's caveat: you shouldn't really trust the result, because most of the correlation will come from zero counts or low non-specific (noise) alignments.
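
      A minimal sketch of the difference, reusing the a and b data frames from the original post:

      r_pearson  <- cor(a$V4, b$V4)                       # just the coefficient (Pearson by default)
      r_spearman <- cor(a$V4, b$V4, method = "spearman")  # rank-based alternative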



      • #4
        Originally posted by mudshark:
        Can you provide a scatterplot of a$V4 vs. b$V4 (probably best to subsample)?

        Just my two cents: Pearson correlations do not provide much information on the similarity of ChIP-seq samples, given that most of the data points reflect background reads. In other words, you get the correlation coefficient of the background noise.
        Thanks for your suggestion. I plotted a scatterplot and it didn't look right: only 3 dots on the plot. With help from people in the lab, I found a bug in my code for calculating RPKM, and that problem is now fixed.

        Regarding the usefulness of Pearson correlations on ChIP-seq data, I agree that the majority of regions will have an RPKM value of 0. Do you know how much this affects the interpretation of the correlation? What would you do to compare two ChIP-seq datasets? Would you call peaks and calculate the overlap of the peaks?

        My purpose in doing this is to assess how saturated my ChIP-seq data is. I split the ChIP-seq data for ONE sample into 2 subsets, then randomly sample 10%, 20%, 40%, 60%, etc. of the reads from each subset, and use the Pearson correlation between the RPKM values (two vectors) of the 2 subsets as a way to assess saturation. I felt the Pearson correlation would do the job for this purpose, roughly as sketched below. What do you think?
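
        A rough sketch of what I mean (rpkm1 and rpkm2 are placeholder matrices, not objects from my real code: rows are the 500 bp bins and columns are the subsampling fractions, one matrix per read subset):

        fractions  <- c(0.1, 0.2, 0.4, 0.6, 0.8, 1.0)
        saturation <- sapply(seq_along(fractions),
                             function(i) cor(rpkm1[, i], rpkm2[, i]))  # Pearson r at each depth
        plot(fractions, saturation, type = "b",
             xlab = "fraction of reads sampled", ylab = "Pearson r between the two subsets")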

        Thanks so much!



        • #5
          Originally posted by kopi-o:
          Simply do cor() instead of cor.test() (which runs a significance test rather than just reporting the coefficient). You can do cor(x, y, method="spearman") for a Spearman rank correlation.

          However, I agree with mudshark's caveat: you shouldn't really trust the result, because most of the correlation will come from zero counts or low non-specific (noise) alignments.
          cor() and cor.test() actually gave the same correlation coefficient.

          My problem is fixed now. If you are interested, please take a look at my reply to mudshark's post and let me know your thoughts.

          Thanks a lot!



          • #6
            I would recommend looking at some tools for ChIP-seq data comparison, such as MAnorm (http://bcb.dfci.harvard.edu/~gcyuan/MAnorm/MAnorm.htm), SeqMiner (http://bips.u-strasbg.fr/seqminer/tiki-index.php), or CHANCE (http://www.ncbi.nlm.nih.gov/pubmed/23068444).

            The thing is that ChIP-seq data is actually mostly noise (to simplify a bit, only a small fraction of the reads fall in peaks), so one needs to be careful to really look at the informative part of the data.



            • #7
              Originally posted by kopi-o:
              I would recommend looking at some tools for ChIP-seq data comparison, such as MAnorm (http://bcb.dfci.harvard.edu/~gcyuan/MAnorm/MAnorm.htm), SeqMiner (http://bips.u-strasbg.fr/seqminer/tiki-index.php), or CHANCE (http://www.ncbi.nlm.nih.gov/pubmed/23068444).

              The thing is that ChIP-seq data is actually mostly noise (to simplify a bit, only a small fraction of the reads fall in peaks), so one needs to be careful to really look at the informative part of the data.
              Thanks for the links, I'll take a look at them!



              • #8
                Another tool to try is the Bioconductor package DiffBind.

                It gives you a number of ways to compare ChIP-seq samples using a variety of scores and normalisations. It will do exactly these types of correlations and show how the samples cluster. The vignette walks through an example in some detail.

                It does rely on some set of intervals of interest being identified beforehand. You can use traditional peak calling or a windowing scheme. To compare replicates we often use all the promoters (windows 1000 bp upstream from each TSS); while this isn't always appropriate, it usually gives a good idea of how similar the signal in the different samples is.
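
                For example, such promoter windows could be generated along these lines; this is only a sketch and assumes the mouse mm9 TxDb annotation package (swap in the one matching your genome):

                # Build 1 kb windows upstream of each TSS and write them to a BED file
                library(GenomicFeatures)
                library(TxDb.Mmusculus.UCSC.mm9.knownGene)   # assumed annotation package for mm9
                library(rtracklayer)

                txdb <- TxDb.Mmusculus.UCSC.mm9.knownGene
                prom <- promoters(genes(txdb), upstream = 1000, downstream = 0)
                export(prom, "promoters_1kb_upstream.bed", format = "BED")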



                • #9
                  Originally posted by rory:
                  Another tool to try is the Bioconductor package DiffBind.

                  It gives you a number of ways to compare ChIP-seq samples using a variety of scores and normalisations. It will do exactly these types of correlations and show how the samples cluster. The vignette walks through an example in some detail.

                  It does rely on some set of intervals of interest being identified beforehand. You can use traditional peak calling or a windowing scheme. To compare replicates we often use all the promoters (windows 1000 bp upstream from each TSS); while this isn't always appropriate, it usually gives a good idea of how similar the signal in the different samples is.
                  OK. Thanks for the info.



                  • #10
                    Originally posted by rory:
                    Another tool to try is the Bioconductor package DiffBind.

                    It gives you a number of ways to compare ChIP-seq samples using a variety of scores and normalisations. It will do exactly these types of correlations and show how the samples cluster. The vignette walks through an example in some detail.

                    It does rely on some set of intervals of interest being identified beforehand. You can use traditional peak calling or a windowing scheme. To compare replicates we often use all the promoters (windows 1000 bp upstream from each TSS); while this isn't always appropriate, it usually gives a good idea of how similar the signal in the different samples is.
                    Hi Rory,

                    I found this thread through Google because I want to do exactly what you suggested using DiffBind. However, a "score" column is needed to create a dba object at the beginning, and obviously the artificial BED files containing all the promoters don't have a score column. Please let me know how you import promoter regions into DiffBind.

                    Many thanks.
                    Xin
                    Mount Sinai School of Medicine

                    BTW, I tried to use "." in the score column and got the following error message:
                    Error in Summary.factor(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : ‘max’ not meaningful for factors



                    • #11
                      Try:

                      > samples <- dba(sampleSheet = "samples.csv", scoreCol = 0)

                      This tells DiffBind that there are no scores. Note that everything will get an identical score of 1, so clustering/PCA plots won't be very interesting until you get read-based scores using dba.count().
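
                      A rough sketch of those follow-up steps (assuming a samples.csv sample sheet whose Peaks column points at the promoter interval file; these calls are not from this thread):

                      library(DiffBind)
                      samples <- dba(sampleSheet = "samples.csv", scoreCol = 0)  # intervals only, no scores
                      samples <- dba.count(samples)   # read-based scores over the intervals
                      plot(samples)                   # correlation heatmap of the samples
                      dba.plotPCA(samples)            # PCA on the same scores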

                      Cheers-

                      Rory

