  • #46
    Does your PCR reaction efficiency change with a more dilute sample?

I've always looked at it as being relative to the standard, as long as the slope is always the same and the standards come off at roughly the same Ct values every time. Does it matter if the reaction is performing at, say, 50% efficiency if it amplifies at the 1E6 standard?

The same is true regarding the Qubit. PicoGreen works relatively well, but you really have to watch that the standards don't deviate.



    • #47
I too agree with all that Nanodrop is not a reliable method of quantitation. We too rely on PicoGreen and qPCR. We only use qPCR to confirm that we have well-constructed libraries to start with, and to see the trend of concentration in comparison to the PicoGreen concentration.
And yes, we do get concentrations from qPCR which vary 2-5x from the PicoGreen concentration. We still stick with the PicoGreen concentration, adjusting a little with the qPCR concentration. I am not sure if I'm clear here... for e.g. if my PicoGreen conc. is 2nM and qPCR conc. is 2.8-3.0nM, then I assume my library conc. to be somewhere around 2.5nM, and use that accordingly for clustering.
As to why qPCR overestimates your libraries, I believe it's because each sample has its own amplification efficiency, which cannot be absolutely correlated with the standards being used.
And regarding dilution factor, yes, that also imposes a variation in quantification. I have been using PhiX and Kapa standards for quantification. PhiX even at high accuracy only shows 85-90% efficiency, whereas Kapa standards give you more than 95% efficiency.
I too have diluted my libraries at 1:50,000 or even up to 1:100,000, which has given consistent concentrations resulting in the desired cluster numbers. Hope this helps.



      • #48
        Originally posted by DNA_Dan View Post
        Does your PCR reaction efficiency change with a more dilute sample?
The efficiency should not change for more diluted samples, no. If the efficiency were not constant, this would easily be detected as a non-linear graph of Cq vs log(SQ).

I use the serial dilutions to calculate the efficiency of my samples. After all, each 2x dilution should result in a Cq exactly 1 cycle higher if the reaction is at 100% efficiency (since the amount of DNA doubles in each cycle). If the shift per dilution is larger than that, the reaction is slower, and this can be used to calculate the efficiency of that sample.
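To make this concrete: for a dilution factor d per step, (E+1)^dCq = d, so E = d^(1/dCq) - 1. A back-of-envelope sketch in Python (the dCq value is invented for illustration):

Code:
# For a 2x dilution series: (E+1)^dCq = 2, so E = 2**(1/dCq) - 1.
delta_cq = 1.15  # invented: observed Cq shift per 2x dilution step
e = 2 ** (1 / delta_cq) - 1
print(f"efficiency ~ {e:.0%}")  # ~83% for dCq = 1.15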

        Originally posted by DNA_Dan View Post
I've always looked at it as being relative to the standard, as long as the slope is always the same and the standards come off at roughly the same Ct values every time. Does it matter if the reaction is performing at, say, 50% efficiency if it amplifies at the 1E6 standard?
It's not directly the efficiency of the standard series that matters, it's the difference between the efficiency of the standard series and that of your sample. So no, it doesn't matter if the standard series is performing at 50% efficiency, as long as the samples amplify with the exact same efficiency.

That's the catch with the qPCR software: it assumes that the slope/efficiency of the standard series is the same as that of your sample. I find that this is usually not the case, which might explain your previous statement that the SQ values you obtain for your serial dilutions don't always match up.

        I've actually worked this out in reasonable detail for my internship project. I find that my results have become much more consistent since I implemented this new method of calculation. If anyone has any questions regarding this, feel free to send a PM or state it here.
        Last edited by DaanV; 06-18-2013, 04:58 AM. Reason: Extra clarification



        • #49
          Originally posted by DaanV View Post
          I've actually worked this out in reasonable detail for my internship project. I find that my results have become much more consistent since I implemented this new method of calculation. If anyone has any questions regarding this, feel free to send a PM or state it here.
Sounds useful. Yes, please do post it.

          --
          Phillip



          • #50
I'm not that qPCR savvy, but I get what you're saying. I definitely would like to take a look at what you have. Anything that makes this more consistent is a win for everyone on the forums.



            • #51
I have noticed qPCR results that seemed to suggest that the slope of the standards was not the same as the slope of a given library. Since one of the axes of this graph is dilution, maybe reaction efficiency is the key.

              I am definitely not a qPCR guy either. But I get the feeling this is something that would be blindingly obvious to a qPCR guy.

              --
              Phillip
              Last edited by pmiguel; 06-18-2013, 09:56 AM.



              • #52
Right... Hold on tight then, as this may take quite a bit of explaining; please bear with me. If you only care about the results, feel free to skip to the bottom of this post (down to where it says "SUMMARY" in big blocky letters).

                So let's start with a description of a general PCR reaction:

                Q = SQ * (E+1)^C

                With:
                Q = the DNA quantity in the sample after C cycles
                SQ = starting quantity (equal to Q at C=0)
                C = number of cycles of PCR
                E = PCR efficiency of that library (depending on GC content, fragment length distribution, and possibly other variables)


                In the case of perfect replication (E=1, so that Q = SQ * 2^C), the amount of DNA is exactly doubled after every cycle of replication.

Now, as you may know for qPCR: the Cq value corresponds to the number of cycles after which a certain level of fluorescence is measured (which stands in direct relation to the amount of double-stranded DNA). So, when that level is reached for any sample, we can re-write the above equation as:

                Qq = SQ * (E+1)^Cq

                With:
                Qq = quantification quantity, or the pre-determined level of fluorescence that a sample must reach.
                Cq = quantification cycle, or the cycle at which Qq is reached.


                It should be noted that Qq is the same for every sample (Qq1=Qq2).

                Now, let's take 1 dilution series of a sample with a known molarity (the standard). The efficiency of your standard is given by the slope of the graph of Cq vs log(SQ), and can be calculated as:

                E = 10^(-1/m) - 1

                With:
                m = the slope of Cq vs log(SQ)


(Normally the efficiency of your standard is given by the software you use; you can use that to check whether you're calculating it correctly. A mathematical derivation of this relation can be found further down.)
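For anyone who'd rather check this in code than in the instrument software, here's a minimal sketch (numpy's polyfit doing the least-squares fit; the standard quantities and Cq values are made-up illustration data):

Code:
import numpy as np

log_sq = np.log10([1e6, 1e5, 1e4, 1e3])  # known standard quantities
cq = np.array([12.1, 15.4, 18.8, 22.1])  # invented Cq measurements

m, uc = np.polyfit(log_sq, cq, 1)  # slope m and y-intercept UC
e = 10 ** (-1 / m) - 1             # E = 10^(-1/m) - 1
print(f"m = {m:.3f}, UC = {uc:.2f}, E = {e:.1%}")  # E ~ 100% here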

                So yeah Phillip, you're right in assuming that the inequality of slopes of standards and libraries is caused directly by efficiency.

Additionally, the y-intercept of the graph of Cq vs log(SQ) of your standard series (the value of Cq when log(SQ) = 0) will be called UC, for Unit Cycle (when log(SQ) = 0, SQ = 1, hence the name).

                Now let's evaluate a library with unknown SQ. We'll call the standard sample 1, and the library sample 2.

                Qq1 = SQ1 * (E1 + 1)^Cq1
                Qq2 = SQ2 * (E2 + 1)^Cq2

                (Qq1/Qq2) = (SQ1/SQ2) * ((E1 + 1)^Cq1 / (E2 + 1)^Cq2)

                Now let's substitute SQ1 with 1, so that Cq1 = UC
                Also note that Qq1=Qq2

                1 = (1/SQ2) * ((E1 + 1)^UC / (E2 + 1)^Cq2)

                Bringing SQ2 to the left hand side results in:

                SQ2 = ((E1 + 1)^UC / (E2 + 1)^Cq2)

This gives us an accurate relation for calculating SQ2. Note that E1 and UC are given by the standard series, while E2 is given by the library. Calculating SQ2 for the 6 different values of Cq2 (resulting from 3 dilutions in duplicate) and taking into account the different dilutions should result in 6 near-equal values for SQ2. This is the equation I use.

                Relation to software
                This part will detail the difference between my method and the method commonly applied in qPCR software.

                I have found that the software uses a simplifying assumption. The assumption is made that the slope of the standard series is a good approximation of the slopes of all libraries. In other words, they assume that the standards and libraries run with near-equal efficiencies.

                Using that assumption, the above relation can be re-written as follows:

                SQ2 = (E1 + 1)^(UC - Cq2)

                Since a^b/a^c = a^(b-c)

Using this equation, I get the exact same values for SQ as the software does. It should be clear that this assumption goes awry when E1 does not equal E2. As I demonstrated in my first post, even seemingly minor differences can lead to huge discrepancies due to the exponential nature of the process.
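To see how quickly this compounds, a small sketch with invented numbers (standard at 100%, library at 90% efficiency):

Code:
e1, e2 = 1.00, 0.90   # E of standard and library
uc, cq2 = 35.0, 15.0  # invented unit cycle and library Cq

sq_software = (e1 + 1) ** (uc - cq2)             # assumes E1 == E2
sq_corrected = (e1 + 1) ** uc / (e2 + 1) ** cq2  # uses the library's own E
print(f"{sq_software:.3g} vs {sq_corrected:.3g}")  # more than 2x apart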

                Additional information
                This is really a part that you don't need to read in order to understand the above. I just thought I'd share it in case anyone is interested.

                The above relation can be re-written generally as:

log_(E+1)(SQ) = UC - Cq

With:
log_(E+1)(SQ) = the logarithm of SQ with base (E + 1)

Since:
a = b^c --> log_b(a) = c


Leading to:

Cq = - LOG(SQ)/LOG(E + 1) + UC

Since:
log_b(a) = LOG(a)/LOG(b)


                If we define:

                m = - 1 / LOG(E + 1)

                We can clearly see that this is a constant (assuming that E is constant).
This also gives us a relation for calculating E once we have the slope:

                E = 10^(-1/m) - 1

                As noted earlier.

                Substituting m into the previous equation leads to:

                Cq = m * LOG(SQ) + UC

This makes it immediately obvious that the graph of Cq versus LOG(SQ) is linear, with slope m and y-intercept UC.

SUMMARY
So here's a basic step-by-step of what I do (a code sketch follows below the list):

1) Run each qPCR plate with a standard dilution series, and run libraries with dilutions 1,000x, 16,000x and 256,000x in duplicate.

2) Calculate the slope (m) and y-intercept (UC) of Cq vs LOG(SQ) of the standard dilution series. The Excel LINEST function is very useful here.

3) Calculate the slope m of Cq vs -LOG(dilution) for the libraries in a similar manner (note that -LOG(dil) differs from LOG(SQ) only by a constant, so the slope is the same).

                4) Calculate E for the standard and all libraries as: E = 10^(-1/m) - 1

5) Calculate SQ for all dilutions of each library as: SQ = (Es + 1)^UC / (El + 1)^Cq
with Es as E of the standard and El as E of the library.
Multiply each SQ by its dilution factor to obtain the molarity of your sample, and average over all 6 values.

                Optionally:
                6) Calculate the relative standard deviation as a check to see how 'reliable' your values are.

                Just for fun:
                7) Also calculate the relative standard deviation of SQ over the various dilutions as calculated by the software and note the differences.
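For anyone who'd rather script this than do it in Excel, here's a rough Python translation of the steps above (numpy.polyfit in place of LINEST; every number below is a placeholder, not real data):

Code:
import numpy as np

def fit(log_sq, cq):
    # Least-squares fit of Cq vs log10(SQ); returns (slope m, intercept UC).
    return np.polyfit(log_sq, cq, 1)

def efficiency(m):
    # Step 4: E = 10^(-1/m) - 1
    return 10 ** (-1 / m) - 1

# Step 2: standard dilution series (placeholder quantities, e.g. in pM).
std_log_sq = np.log10([20, 2, 0.2, 0.02, 0.002, 0.0002])
std_cq = np.array([5.1, 8.4, 11.8, 15.1, 18.5, 21.8])
m_std, uc = fit(std_log_sq, std_cq)
e_std = efficiency(m_std)

# Step 3: one library, dilutions 1,000x / 16,000x / 256,000x in duplicate.
dil = np.array([1e3, 1e3, 16e3, 16e3, 256e3, 256e3])
lib_cq = np.array([14.2, 14.3, 18.2, 18.3, 22.2, 22.4])
m_lib, _ = fit(-np.log10(dil), lib_cq)  # -LOG(dil) stands in for LOG(SQ)
e_lib = efficiency(m_lib)

# Step 5: SQ = (Es+1)^UC / (El+1)^Cq, scaled back up by the dilution factor.
molarity = (e_std + 1) ** uc / (e_lib + 1) ** lib_cq * dil

# Step 6: relative standard deviation as a reliability check.
rsd = molarity.std(ddof=1) / molarity.mean()
print(f"E_std = {e_std:.1%}, E_lib = {e_lib:.1%}")
print(f"mean = {molarity.mean():.3g} pM, RSD = {rsd:.1%}")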


                -----------

Ok, so this has become a bit long-winded. I just thought I'd give all the information in case anyone was interested.

                I hope that the idea has come across though. Please don't be afraid to ask any questions.



                • #53
                  Hi DaanV,
                  Thanks for the detailed explanation. I am still studying it. But one question does spring to mind: does determining efficiency require a dilution series? Is it not possible to measure the efficiency directly by measuring the increase in fluorescence each cycle of a single reaction? That is, if the signal is exactly doubling each cycle, then the efficiency is 100%.
                  Again, I am not a qPCR guy, so the above may be naive.

                  --
                  Phillip



                  • #54
                    Hey Phillip,
                    You're welcome. It's my pleasure to finally be able to contribute something to SEQanswers.

                    Your question is valid, and it is indeed one I did pursue at some stage during my internship. In theory you are of course entirely right. And indeed, with more effort it may even be possible to do it (though I've not put in the dedication to see how robust the method is).

In essence, the problem you run into is that the curves aren't log-linear over the full range of the process. At the early stages, I think this is caused by the lower detection limit of the camera; this shows up as RFU (Relative Fluorescence Units) values fluctuating around 0 (+/- 30 or so) for the first bunch of cycles (a few cycles for high-concentration samples, more for dilutions).

                    Then at the end of the process the curve flattens again. I suppose this is caused by the reaction running out of nucleotides/primers. Worth a test perhaps, seeing if adding more of either of them increases the maximum value obtained.

These two effects combined result in the characteristic "S"-shaped curves that you find with qPCR. Only the truly log-linear part in between (which typically lasts only 6-8 cycles) can be used to calculate the efficiency. The efficiency you find then depends on exactly which cycles you decide to include in or exclude from this 'log-linear phase', which to my taste becomes a bit too arbitrary and prone to user bias.
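To sketch what such a single-curve estimate might look like (the RFU values and the cycle window are invented; choosing that window is exactly the arbitrary step I mean):

Code:
import numpy as np

rfu = np.array([30, 28, 31, 55, 105, 210, 410, 790, 1450, 2400, 3300, 3900])
cycles = np.arange(1, len(rfu) + 1)

window = (cycles >= 4) & (cycles <= 9)  # hand-picked log-linear phase
slope, _ = np.polyfit(cycles[window], np.log10(rfu[window]), 1)
e = 10 ** slope - 1  # since RFU ~ SQ * (E+1)^C
print(f"E ~ {e:.0%}")  # ~93% for these numbers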

                    I hope this clarifies. Of course you're free to pursue the idea, as I'd love to be proven wrong. A quick test on some of my own data indicates that the acquired score for E is at least in the range where I expect it to be.



                    • #55
                      Originally posted by DaanV View Post
These two effects combined result in the characteristic "S"-shaped curves that you find with qPCR. [...]
                      Yes, this "S" shaped curve is very familiar to me from a variety of processes. We were even given names for parts of the curve: the initial flat part is called the "lag phase", then the middle log linear part is call the "log phase" and the final flat part, well for bacterial growth anyway is called "stationary phase".

We have a Lifetech (also known as "Applied Biosystems" and "Invitrogen") StepOne qPCR machine. It seems to search for an early part of the "log phase" via some algorithm and calls this the "Ct", for "cycle threshold". This may just be another name for one of the parameters you describe above. Anyway, to the extent this is a reasonable prediction of the beginning of the "log phase", the efficiency calculation may be correct.

                      The issue here is just the obvious one -- needing to triple the number of qPCR reactions would likely lead to a substantial increase in our costs. Especially as this instrument has recently turned into quite a bottleneck at times.

                      Actually we have some aberrant clustering results -- specifically intra-pool -- that we could examine to see if the efficiency metric predicted issues we see.

                      Again, thanks for your insights. This has been an issue for us for years now. Hopefully this will get us nearer to managing it.

                      --
                      Phillip



                      • #56
Yes, I'm familiar with the nomenclature of the S-shaped growth curves of microbes. I wasn't sure if I could apply the same names to these, though. Log-linear phase seems as decent a term as any.

Personally I use a Bio-Rad CFX Touch and CFX Manager. Judging by this link from Life Technologies (http://find.lifetechnologies.com/Glo...Update_FLR.pdf), the Ct score you mention is the same as (or at least closely related to) the Cq I have described above. It is basically the number of cycles after which the sample reaches a pre-determined threshold; the threshold in turn is set at 10x the standard deviation of the baseline.
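A minimal sketch of that thresholding rule (numbers invented; real instruments differ in the details):

Code:
import numpy as np

rfu = np.array([3, -2, 4, 1, -3, 2, 6, 15, 33, 70, 150, 310, 640])
baseline = rfu[:6]                        # pre-amplification cycles
threshold = 10 * baseline.std()           # 10x SD of the baseline
cq = int(np.argmax(rfu > threshold)) + 1  # first cycle above threshold
print(f"threshold = {threshold:.1f} RFU, Cq = {cq}")  # Cq = 9 here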

It may indeed be a good idea to use this value as the start of the logarithmic phase. That would remove half the problem, so it's a good start. The other half of the problem remains: you still need to determine the end of the logarithmic phase manually. That may be quite hard, as competition for primer binding increases gradually during the process, meaning that the measurement is most accurate in the early part of the log phase (exactly why the threshold for Cq/Ct is placed as low as possible).



                        • #57
                          Okay, I'll take a look at some data I have. The StepOne software does allow export of data at various levels of "rawness".

                          I have actually been down this path before. But it felt less like a "path" and more like wilderness for which I did not have a map. Also I had no idea whether the solution was there at all. Now I at least would have some sense that it should be...

                          --
                          Phillip




                          • #58
Originally posted by DaanV View Post
The other half of the problem remains: you still need to determine the end of the logarithmic phase manually. [...]
From the data I am perusing, it looks like the "efficiency" falls above the autothreshold and increases (even past 100%) below the autothreshold. So maybe an extrapolation of the "efficiency at the autothreshold"?

                            --
                            Phillip



                            • #59
Or maybe what we want is the Ct and the first derivative of the amplification curve at the Ct. That would be the slope at that point. If the slope at the Ct doesn't match that of the standards (the nearest standard?), hopefully one could do a correction based on that?



                              • #60
                                Kapa qPCR help

Thanks for all the qPCR insights.

Our lab is having trouble getting reproducible data with the KAPA SYBR FAST qPCR kit. We also use the KAPA Illumina standards. It could be an obvious problem, but we can't seem to pin it down. Here's our protocol:

Library Prep
-KAPA/TruSeq Illumina library construction.
-1 ug input for Illumina library prep.

qPCR
-KAPA SYBR FAST qPCR kit.
-KAPA Illumina standards 1-6.
-library samples in triplicate.
-KAPA Illumina standards in triplicate.

-Serially dilute libraries 1:125,000 (1:50, 1:50, 1:50) in 10 mM Tris-HCl pH 8.0 with 0.05% Tween 20. (Each 1:50 dilution is 98 ul + 2 ul library; vortex and repeat.) We've found that anything less dilute doesn't fall within the range of the KAPA Illumina standards and lands outside the standard curve. We don't use a multichannel for dilutions, only a P100 and a P10 pipettor.

-Add 6 ul of KAPA SYBR FAST qPCR mix with primers to each well + 4 ul of diluted library/Illumina standard (1-6). Should we be running a 20 ul qPCR rxn instead of 10 ul?

-Set up the StepOne software: input 6 standards in triplicate, 20 uM starting concentration, and a 1:10 standards dilution.

Attached are the resulting quantities of three preps, plotted as library sample versus pM, calculated as (qPCR mean quantity of each triplicate) * (bp correction 452/500) * (dilution factor 125,000).
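For clarity, that arithmetic in code (the mean quantity is a placeholder):

Code:
mean_qty_pm = 1.8e-4   # placeholder: mean of one triplicate (pM)
size_corr = 452 / 500  # standard bp / library bp, as above
dilution = 125_000     # 1:50 x 1:50 x 1:50
print(f"{mean_qty_pm * size_corr * dilution:.1f} pM")  # ~20.3 pM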

                                Hmm1 & Hmm2 seems to have abnormally high pMs. Normal1 is around the range which I'd expect? Don't mind all of the different pMs but only the average pM's. What are your thoughts?
                                Attached Files
