

06-14-2013, 12:04 PM  #41 
Junior Member
Location: nyc Join Date: Oct 2011
Posts: 3

Hello,
I have been quantifying my libraries with the KAPA system and find that I have to dilute them at least 1:16,000 to fall within the reliable range of the standard curve. Also, my calculated molarity is 3-5 times higher than what I get with a NanoDrop. Any insight into what might be going on would be greatly appreciated! Thanks in advance. 
06-14-2013, 12:30 PM  #42 
Senior Member
Location: Oklahoma Join Date: Sep 2009
Posts: 411

I serially dilute my libraries to 1:10,000, 1:100,000, and 1:1,000,000 to fall within the standard curve (and run all three). Not a big deal. And I never trust anything a Nanodrop says.

06-14-2013, 03:18 PM  #43 
Member
Location: Montana Join Date: Nov 2008
Posts: 21

I'm with GW_OK. A NanoDrop really has no business in NGS as far as I'm concerned. It measures everything, whereas the KAPA kit measures only "functional" molecules that will actually contribute to amplification; that is why you see a difference. There is unligated product, single-end-ligated product, etc., which contributes to A260 but will not actually amplify in the KAPA assay (or on the slide).
I typically dilute my libraries 1:50,000. I used to run different dilutions, but have seen up to 20% difference in concentration due to the added dilution steps. The Ct measurement is more accurate the further out you dilute, but reproducing that exact same dilution is questionable and carries a higher deviation. I have found it best to focus on reproducibility of technique: using fixed-volume pipettors, watching whether I blow out on the pipettes, etc. After all, the hope is that what you do at the bench and calculate from the KAPA kit is reproducible when you go back to your stock library tube. Doing it EXACTLY the same way every time is the key to repeated consistency. 
06-16-2013, 01:21 PM  #44 
Senior Member
Location: Ireland Join Date: Jan 2009
Posts: 101

I echo GW_OK & DNA_DAN with regard to the NanoDrop: it should not be relied on for library quantification at any stage. My advice, if you want to quantify outside of your KAPA protocol, is to use a Qubit; it is more sensitive and consistent than the NanoDrop.

06-17-2013, 12:41 AM  #45  
Junior Member
Location: The Netherlands Join Date: Dec 2012
Posts: 8

I have found that indeed, you calculate different starting quantities (SQ) for different dilutions of the same sample. However, it should be kept in mind that SQ is calculated based on the PCR efficiency of the standard, and not all libraries run through the reaction at the same efficiency. One might argue that the differences in efficiency between the standard and the sample are relatively small (most of the time <5 percentage points) and can thus be neglected. I would like to stress, though, that the difference between 95% efficiency and 98% efficiency is not negligible after 20 cycles of qPCR (a typical Ct for a heavily diluted sample) due to the exponential nature of PCR: ((0.95+1)/(0.98+1))^20 = 0.74. I.e., if your standard has 98% efficiency and the sample has 95% efficiency, your calculated SQ value can be ~26% off! For this reason, I always run duplicates of three dilutions (1,000x, 16,000x and 256,000x) for every sample, calculate the efficiency of each sample, and then use this efficiency to make a more accurate estimate of the SQ. Using this method, the results for each subsequent dilution are much more consistent than when simply using the SQ values as calculated by the qPCR software (the standard deviation of the 6 values is almost always <10% of SQ with my method). Of course this does mean that I'm limited to 13-14 samples per qPCR plate, but I find that this is only a minor investment compared to the improved accuracy. I hope this helps some people out. Last edited by DaanV; 06-17-2013 at 06:35 AM. 
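To see how quickly a small efficiency gap compounds over cycles, here is a quick sketch of the arithmetic from the post above in plain Python (the function name is mine, for illustration only):

```python
# Illustration of how a small efficiency gap between standard and
# library compounds over many qPCR cycles.
def quantification_bias(e_standard, e_library, cycles):
    """Ratio of apparent to true starting quantity when a library
    amplifying at e_library is quantified against a standard
    amplifying at e_standard."""
    return ((1 + e_library) / (1 + e_standard)) ** cycles

# The 3-point gap from the post: 95% vs 98% efficiency over 20 cycles.
bias = quantification_bias(0.98, 0.95, 20)
print(round(bias, 2))  # 0.74, i.e. the estimate comes out ~26% low
```

Running it reproduces the 0.74 factor quoted above; with equal efficiencies the bias is exactly 1.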

06-17-2013, 08:09 AM  #46 
Member
Location: Montana Join Date: Nov 2008
Posts: 21

Does your PCR reaction efficiency change with a more dilute sample?
I've always looked at it as being relative to the standard, as long as the slope is always the same and the standards come off at roughly the same Ct values every time. Does it matter if the reaction is performing at, say, 50% efficiency if it amplifies like the 1E6 standard? The same is true regarding the Qubit: PicoGreen works relatively well, but you really have to watch that the standards don't deviate. 
06-17-2013, 07:22 PM  #47 
Member
Location: Indo Join Date: Oct 2011
Posts: 20

I too agree with all of the above: the NanoDrop is not a reliable method of quantitation. We rely on PicoGreen and qPCR. We use qPCR mainly to confirm that we have well-constructed libraries to start with, and to see the trend of the concentration in comparison to the PicoGreen value.
And yes, we do get concentrations from qPCR that vary 2-5x from the PicoGreen concentration. We still stick with the PicoGreen concentration, adjusted a little toward the qPCR value. I am not sure if I'm clear here... for example, if my PicoGreen conc. is 2 nM and my qPCR conc. is 2.8-3.0 nM, then I assume my library conc. to be somewhere around 2.5 nM, and use that for clustering. As to why qPCR overestimates your libraries, I believe it's because each sample has its own amplification efficiency, which cannot be absolutely correlated with the standards being used. And yes, the dilution factor also imposes variation on the quantification. I have been using PhiX and KAPA standards for quantification; PhiX even at high accuracy only shows 85-90% efficiency, whereas the KAPA standards give you more than 95% efficiency. I too have diluted my libraries 1:50,000 or even up to 1:100,000, which has given consistent concentrations resulting in the desired cluster numbers. Hope this helps 
06-18-2013, 12:38 AM  #48  
Junior Member
Location: The Netherlands Join Date: Dec 2012
Posts: 8

I use the serial dilutions to calculate the efficiency of my samples. After all, each 2x dilution should result in a Cq exactly 1 cycle higher if the reaction is at 100% efficiency (since the amount of DNA doubles each cycle). If it rises more slowly than that, that can be used to calculate the efficiency of that sample.
That's the whole point of the qPCR software: the assumption is made that the slope/efficiency of the standard series is the same as that of your sample. I find that this is not usually the case, which might explain your previous statement that the SQ values you obtain for your serial dilutions don't always match up. I actually worked this out in reasonable detail for my internship project, and I find that my results have become much more consistent since I implemented this new method of calculation. If anyone has any questions regarding this, feel free to send a PM or ask here. Last edited by DaanV; 06-18-2013 at 05:58 AM. Reason: Extra clarification 
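The efficiency-from-dilutions idea above can be sketched roughly like this (my own illustration, not DaanV's actual spreadsheet; the function name and values are made up):

```python
import math

# Estimate a sample's PCR efficiency from a serial dilution, using a
# least-squares fit of Cq versus log10(relative concentration).
def efficiency_from_dilutions(dilution_factors, cq_values):
    """dilution_factors e.g. [1000, 16000, 256000]; returns E, where
    E = 1.0 means perfect doubling every cycle."""
    # log10 of relative starting quantity (up to an irrelevant constant)
    x = [math.log10(1.0 / d) for d in dilution_factors]
    y = cq_values
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
    return 10 ** (-1.0 / m) - 1.0

# Perfect doubling: each 2x dilution raises Cq by exactly 1.
e = efficiency_from_dilutions([1000, 2000, 4000], [10.0, 11.0, 12.0])
print(round(e, 3))  # 1.0 -> 100% efficiency
```

With real duplicate wells you would fit all six (dilution, Cq) points at once rather than three.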

06-18-2013, 08:26 AM  #49  
Senior Member
Location: Purdue University, West Lafayette, Indiana Join Date: Aug 2008
Posts: 2,317

Quote:
 Phillip 

06-18-2013, 09:03 AM  #50 
Member
Location: Montana Join Date: Nov 2008
Posts: 21

I'm not that qPCR savvy, but I get what you're saying. I'd definitely like to take a look at what you have. Anything that makes this more consistent is a win for everyone on the forums.

06-18-2013, 09:17 AM  #51 
Senior Member
Location: Purdue University, West Lafayette, Indiana Join Date: Aug 2008
Posts: 2,317

I have noticed qPCR results that seemed to suggest that the slope of the standards was not the same as the slope of a given library. Since one of the axes of this graph is dilution, maybe reaction efficiency is the key.
I am definitely not a qPCR guy either, but I get the feeling this is something that would be blindingly obvious to one. -- Phillip Last edited by pmiguel; 06-18-2013 at 10:56 AM. 
06-19-2013, 01:10 AM  #52 
Junior Member
Location: The Netherlands Join Date: Dec 2012
Posts: 8

Right... Hold on tight then, as this may take quite a bit of explaining; please bear with me. If you only care about the results, feel free to skip to the bottom of this post (down to where it says "SUMMARY" in big blocky letters).

Let's start with a description of a general PCR reaction:

Q = SQ * (E+1)^C

With:
Q = the DNA quantity in the sample after C cycles
SQ = starting quantity (equal to Q at C=0)
C = number of cycles of PCR
E = PCR efficiency of that library (depending on GC content, fragment length distribution, and possibly other variables)

In the case of perfect replication (E=1, so that Q = SQ * 2^C), the amount of DNA is exactly doubled after every cycle of replication.

Now, in qPCR the Cq values correspond to the number of cycles after which a certain level of fluorescence is measured (which stands in direct relation to the amount of double-stranded DNA). So we can rewrite the above equation as:

Qq = SQ * (E+1)^Cq

With:
Qq = quantification quantity, or the predetermined level of fluorescence that a sample must reach.
Cq = quantification cycle, or the cycle at which Qq is reached.

It should be noted that Qq is the same for every sample (Qq1 = Qq2).

Now, take one dilution series of a sample with known molarity (the standard). The efficiency of your standard is given by the slope of the graph of Cq vs log(SQ), and can be calculated as:

E = 10^(-1/m) - 1

With:
m = the slope of Cq vs log(SQ)

(Normally the efficiency of your standard is given by the software you use; you can use that to check that you're calculating it correctly. A mathematical derivation of this relation can be found further down.)

So yes Phillip, you're right in assuming that the inequality of slopes of standards and libraries is caused directly by efficiency.

Additionally, the y-intercept of the graph of Cq vs log(SQ) of your standard series (the value of Cq when log(SQ) = 0) will be called UC, for Unit Cycle (when log(SQ)=0, SQ=1, hence the name).

Now let's evaluate a library with unknown SQ. We'll call the standard sample 1, and the library sample 2.

Qq1 = SQ1 * (E1 + 1)^Cq1
Qq2 = SQ2 * (E2 + 1)^Cq2
(Qq1/Qq2) = (SQ1/SQ2) * ((E1 + 1)^Cq1 / (E2 + 1)^Cq2)

Now substitute SQ1 = 1, so that Cq1 = UC, and note that Qq1 = Qq2:

1 = (1/SQ2) * ((E1 + 1)^UC / (E2 + 1)^Cq2)

Bringing SQ2 to the left-hand side results in:

SQ2 = (E1 + 1)^UC / (E2 + 1)^Cq2

This gives us an accurate relation for SQ2. Note that E1 and UC are given by the standard series, while E2 is given by the library. Calculating SQ2 for the 6 different values of Cq2 (resulting from 3 dilutions in duplicate), and taking the dilution factors into account, should result in 6 near-equal values for SQ2. This is the equation I use.

Relation to software

This part details the difference between my method and the method commonly applied in qPCR software. The software uses a simplifying assumption: that the slope of the standard series is a good approximation of the slopes of all libraries. In other words, it assumes that the standards and libraries run at near-equal efficiencies. Using that assumption, the above relation can be rewritten as follows:

SQ2 = (E1 + 1)^(UC - Cq2)

(since a^b / a^c = a^(b-c))

Using this equation, I get exactly the same values for SQ as the software does. It should be clear that this assumption goes awry when E1 does not equal E2. As I demonstrated in my first post, even seemingly minor differences can lead to huge errors due to the exponential nature of the process.

Additional information

You don't need this part in order to understand the above; I just thought I'd share it in case anyone is interested. The relation above can be rewritten generally as:

log_(E+1)(SQ) = UC - Cq

(where log_(E+1) denotes the logarithm with base (E+1), since a = b^c implies log_b(a) = c)

Leading to:

Cq = -log(SQ)/log(E + 1) + UC

(since log_b(a) = log(a)/log(b))

If we define:

m = -1 / log(E + 1)

we can clearly see that this is a constant (assuming that E is constant). This also gives us a relation for E when we have the slope:

E = 10^(-1/m) - 1

as noted earlier. Substituting m into the previous equation leads to:

Cq = m * log(SQ) + UC

which makes it immediately obvious that the graph of Cq versus log(SQ) is linear, with slope m and y-intercept UC.

SUMMARY

Here's a basic step-by-step of what I do:
1) Run each qPCR plate with a standard dilution series, and run libraries at dilutions 1,000x, 16,000x and 256,000x in duplicate.
2) Calculate the slope (m) and y-intercept (UC) of Cq vs log(SQ) for the standard dilution series. The Excel LINEST function is very useful here.
3) Calculate the slope m of Cq vs log(dilution) for each library in a similar manner (note that log(dilution) stands in for log(SQ) up to a constant offset).
4) Calculate E for the standard and all libraries as: E = 10^(-1/m) - 1
5) Calculate SQ for each library, for all dilutions of that library, as: SQ = (Es + 1)^UC / (El + 1)^Cq, with Es the E of the standard and El the E of the library. Multiply each SQ by its dilution factor to obtain the molarity of your sample, and average over all 6 values.
Optionally:
6) Calculate the relative standard deviation as a check of how 'reliable' your values are.
Just for fun:
7) Also calculate the relative standard deviation of SQ over the various dilutions as calculated by the software, and note the differences.

OK, so this has potentially become a bit long-winded. I just thought I'd give all the information in case anyone was interested. I hope the idea has come across, though. Please don't be afraid to ask any questions. 
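For anyone who prefers code to spreadsheets, here is one possible translation of the step-by-step above into Python (my own sketch, not DaanV's spreadsheet: the Cq values are invented for illustration, and Excel's LINEST is replaced by a hand-rolled least-squares fit):

```python
import math

def fit_line(x, y):
    """Least-squares slope and intercept (what LINEST returns)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    m = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) \
        / sum((a - xbar) ** 2 for a in x)
    return m, ybar - m * xbar

def efficiency(m):
    # Step 4: E = 10^(-1/m) - 1
    return 10 ** (-1.0 / m) - 1.0

# Step 2: standard series with known log10(SQ) and measured Cq
# (made-up values, roughly 100% efficient).
std_log_sq = [2, 1, 0, -1, -2, -3]
std_cq = [8.0, 11.32, 14.64, 17.97, 21.29, 24.61]
m_std, uc = fit_line(std_log_sq, std_cq)
e_std = efficiency(m_std)

# Step 3: one library, three dilutions in duplicate; log10(dilution)
# stands in for log10(SQ) when fitting the library's own slope.
lib_dilutions = [1000, 1000, 16000, 16000, 256000, 256000]
lib_cq = [18.1, 18.2, 22.3, 22.2, 26.4, 26.3]  # made-up values
m_lib, _ = fit_line([math.log10(1 / d) for d in lib_dilutions], lib_cq)
e_lib = efficiency(m_lib)

# Step 5: SQ per well, corrected for dilution, then averaged.
sqs = [((e_std + 1) ** uc / (e_lib + 1) ** cq) * d
       for d, cq in zip(lib_dilutions, lib_cq)]
mean_sq = sum(sqs) / len(sqs)
print(round(e_std, 3), round(e_lib, 3), round(mean_sq, 1))
```

With these invented numbers the six dilution-corrected SQ values agree to within roughly 10%, which is the kind of consistency DaanV reports.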
06-19-2013, 05:07 AM  #53 
Senior Member
Location: Purdue University, West Lafayette, Indiana Join Date: Aug 2008
Posts: 2,317

Hi DaanV,
Thanks for the detailed explanation. I am still studying it. But one question does spring to mind: does determining efficiency require a dilution series? Is it not possible to measure the efficiency directly by measuring the increase in fluorescence each cycle of a single reaction? That is, if the signal is exactly doubling each cycle, then the efficiency is 100%. Again, I am not a qPCR guy, so the above may be naive.  Phillip 
06-19-2013, 05:50 AM  #54 
Junior Member
Location: The Netherlands Join Date: Dec 2012
Posts: 8

Hey Phillip,
You're welcome. It's my pleasure to finally be able to contribute something to SEQanswers. Your question is valid, and it is indeed one I pursued at some stage during my internship. In theory you are of course entirely right, and with more effort it may even be possible to do it (though I've not put in the dedication to see how robust the method is). In essence, the problem you run into is that the curves aren't logarithmic over the full range of the process. At the early stages, I think this is caused by the lower detection limit of the camera; it shows up as the RFU (Relative Fluorescence Units) values fluctuating around 0 (+/- 30 or so) for the first bunch of cycles (a few cycles for high-concentration samples, more for dilutions). Then at the end of the process the curve flattens again; I suppose this is caused by the reaction running out of nucleotides/primers. (Worth a test, perhaps: seeing if adding more of either increases the maximum value obtained.) These two effects combined result in the characteristic "S"-shaped curves that you find with qPCR. Only the truly logarithmic part in between (which typically lasts only 6-8 cycles) can be used to calculate the efficiency. The efficiency you find then depends on exactly which cycles you decide to include in or exclude from this 'logarithmic phase', which to my taste becomes a bit too arbitrary and prone to user bias. I hope this clarifies things. Of course you're free to pursue the idea, as I'd love to be proven wrong. A quick test on some of my own data indicates that the acquired value for E is at least in the range where I expect it to be. 
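The ambiguity described above can be made concrete with a toy S-curve (my own sketch, not DaanV's analysis): estimate the per-cycle amplification ratio and see how few cycles actually look like clean doubling.

```python
# Rough sketch of why picking the log-linear window is tricky:
# compute per-cycle amplification ratios and keep only the stretch
# where the ratio is roughly stable.
def per_cycle_ratios(rfu):
    """Ratio of successive fluorescence readings; meaningful only once
    the signal is well above baseline noise."""
    return [b / a for a, b in zip(rfu, rfu[1:]) if a > 0]

# Synthetic S-curve: noise floor, near-perfect doubling, then plateau.
curve = [0.5, 0.4, 0.6, 1, 2, 4, 8, 16, 32, 60, 100, 130, 145, 150, 151]
ratios = per_cycle_ratios(curve)
# The "clean" doubling stretch (ratios near 2) is only a handful of
# cycles; exactly where it starts and ends is a judgment call.
doubling = [r for r in ratios if 1.8 < r < 2.2]
print(len(doubling))  # 6
```

Widening or narrowing the 1.8-2.2 acceptance band changes which cycles are included, which is precisely the user-bias problem mentioned above.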
06-19-2013, 06:28 AM  #55  
Senior Member
Location: Purdue University, West Lafayette, Indiana Join Date: Aug 2008
Posts: 2,317

We have a Lifetech (also known as "Applied Biosystems" and "Invitrogen") StepOne qPCR machine. It seems to search for an early part of the "log phase" via some algorithm and calls this the "Ct", for "cycle threshold". This may just be another name for one of the parameters you describe above. Anyway, to the extent this is a reasonable prediction of the beginning of the log phase, the efficiency-of-reaction calculation may be correct. The issue here is just the obvious one: needing to triple the number of qPCR reactions would likely lead to a substantial increase in our costs, especially as this instrument has recently turned into quite a bottleneck at times. Actually, we have some aberrant clustering results (specifically intra-pool) that we could examine to see whether the efficiency metric predicts the issues we see. Again, thanks for your insights. This has been an issue for us for years now; hopefully this will get us nearer to managing it. -- Phillip 

06-19-2013, 07:44 AM  #56 
Junior Member
Location: The Netherlands Join Date: Dec 2012
Posts: 8

Yes, I'm familiar with the nomenclature of the S-shaped growth curves of microbes; I wasn't sure if I could apply the same names to these, though. "Log-linear phase" seems like as decent a term as any.
Personally I use a Bio-Rad CFX Touch and CFX Manager. Judging by http://find.lifetechnologies.com/Glo...Update_FLR.pdf (a link from Life Technologies), the Ct score you mention is the same as (or at least closely related to) the Cq I described above. It is basically the number of cycles after which the sample reaches a predetermined threshold; the threshold in turn is set at 10x the standard deviation of the baseline. It may indeed be a good idea to use this value as the start of the logarithmic phase. That would remove half the problem, so it's a good start. The other half of the problem remains: you still need to determine the end of the logarithmic phase manually. That may be quite hard, as competition for primer binding increases gradually during the process, meaning that the measurement is most accurate early in the log phase (exactly why the threshold for Cq/Ct is placed as low as possible). 
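As a toy illustration of the thresholding described above (my own sketch; real instruments interpolate between cycles and do far more smoothing than this):

```python
import statistics

# Derive a Cq/Ct the way described above: threshold = 10x the standard
# deviation of the baseline fluorescence, Cq = first cycle crossing it.
def cq_from_curve(rfu, baseline_cycles=10):
    baseline = rfu[:baseline_cycles]
    threshold = 10 * statistics.stdev(baseline)
    for cycle, value in enumerate(rfu, start=1):
        if value > threshold:
            return cycle
    return None  # never crossed: no amplification detected

# Flat noisy baseline, then exponential growth from cycle 11 onward.
curve = [1, -2, 2, -1, 0, 1, -1, 2, 0, 1] + [2 ** i for i in range(1, 15)]
print(cq_from_curve(curve))  # 14
```

Note this only fixes the start of the log-linear window; the end-of-window problem discussed above remains.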
06-19-2013, 08:20 AM  #57  
Senior Member
Location: Purdue University, West Lafayette, Indiana Join Date: Aug 2008
Posts: 2,317

Okay, I'll take a look at some data I have. The StepOne software does allow export of data at various levels of "rawness".
I have actually been down this path before. But it felt less like a "path" and more like wilderness for which I had no map, and I had no idea whether the solution was there at all. Now I at least have some sense that it should be... -- Phillip 


06-19-2013, 09:47 AM  #58  
Senior Member
Location: Purdue University, West Lafayette, Indiana Join Date: Aug 2008
Posts: 2,317

Quote:
 Phillip 

06-19-2013, 11:22 AM  #59 
Senior Member
Location: Purdue University, West Lafayette, Indiana Join Date: Aug 2008
Posts: 2,317

Or maybe what we want is the Ct plus the first derivative of the amplification curve at the Ct; that would be the slope at that point. If the slope at the Ct doesn't match that of the standards (the nearest standard?), hopefully one could apply a correction based on that?

06-20-2013, 06:51 AM  #60 
Member
Location: Cambridge Join Date: Nov 2012
Posts: 21

Kapa qPCR help
Thanks for all the qPCR insights.
Our lab is having some trouble getting reproducible data from the KAPA SYBR qPCR kit. We also use the KAPA Illumina standards. It could be an obvious problem, but we can't seem to pin it down. Here's our protocol:

Library prep:
KAPA/TruSeq Illumina library construction, 1 ug input for library prep.

qPCR:
KAPA SYBR FAST qPCR kit with KAPA Illumina standards 1-6. Library samples in triplicate; KAPA Illumina standards in triplicate. Serially dilute libraries 1:125,000 (1:50, 1:50, 1:50) in 10 mM Tris-HCl pH 8.0, 0.05% Tween 20. (Each 1:50 dilution is 98 ul + 2 ul library; vortex and repeat.) We've found that lesser dilutions don't fall within the range of the KAPA Illumina standards and land outside the standard curve. We don't use a multichannel for dilutions, only a p100 and a p10 pipettor. Add 6 ul of KAPA SYBR FAST qPCR mix with primers to each well + 4 ul of diluted library/Illumina standard (1-6). Should we be running a 20 ul qPCR reaction instead of 10 ul? In the StepOne software, we input the standards as 6 in triplicate, 20 uM starting concentration, and 1:10 standard dilutions.

Attached are the resulting quantities of three preps, plotted as library sample versus pM, calculated as (qPCR mean quantity of each triplicate) * (bp correction 452/500) * (dilution 125,000). Hmm1 & Hmm2 seem to have abnormally high pM values; Normal1 is around the range I'd expect. Don't mind all of the individual pM values, only the average pMs. What are your thoughts? 
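The concentration arithmetic in the last paragraph can be sketched as follows (my own helper function; 452 bp is the standard fragment length used in the correction above, and 500 bp is this lab's assumed average library size, so adjust both to your own kit and fragment distribution):

```python
# Size-adjusted library concentration from a qPCR triplicate mean:
# pM = mean quantity * (standard bp / library bp) * dilution factor.
def library_pm(mean_quantity_pm, dilution, library_bp, standard_bp=452):
    """Undiluted library concentration in pM."""
    return mean_quantity_pm * (standard_bp / library_bp) * dilution

# Example: a 1:125,000 dilution reading 2e-5 pM at the instrument.
conc = library_pm(mean_quantity_pm=2.0e-5, dilution=125000, library_bp=500)
print(round(conc, 3))  # 2.26 pM undiluted
```

If the individual triplicate wells disagree wildly before averaging, the dilution series (rather than this arithmetic) is the more likely culprit.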