SEQanswers

Old 06-20-2013, 06:38 AM   #61
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Quote:
Originally Posted by Isequencestuff View Post
[...]

Attached are the resulting quantities of three preps plotted library sample versus pM calculated by the (qPCR mean quantity of each triplicate)*(bp range 452/500)*(dilution 125,000).

Hmm1 & Hmm2 seem to have abnormally high pM values, while Normal1 is around the range I'd expect. Don't mind the individual pM values, only the average pM per library. What are your thoughts?
I can't figure out what the different bars in your bar chart denote. The y-axis is concentration in pM, but each chart has 8 or 4 bars at different levels. What are those? Also, two of your charts have bars in two different colors.

As for some of your final libraries having much higher concentrations than you expected (way above 10 nM): Illumina kits are designed for overkill. They have to be, so that samples of lower quality or quantity will still produce something you can use. Typically we take a look at the preamplification libraries on the Bioanalyzer and usually end up doing only 4 cycles instead of 10.
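For reference, the size-adjusted conversion quoted above (mean qPCR quantity of the triplicate, scaled by the 452/500 bp correction and the dilution factor) can be sketched as follows; the triplicate values here are hypothetical:

```python
# Size-adjusted library concentration, per the formula in the quoted post.
# The qPCR standard is 452 bp; the library averages ~500 bp, so the raw
# quantity is scaled by 452/500 before multiplying by the dilution factor.

def library_pM(triplicate_pM, std_bp=452, lib_bp=500, dilution=125_000):
    """Mean of a qPCR triplicate, corrected for fragment size and dilution."""
    mean_q = sum(triplicate_pM) / len(triplicate_pM)
    return mean_q * (std_bp / lib_bp) * dilution

# Hypothetical triplicate of the diluted library, in pM:
print(library_pM([1.1e-4, 1.0e-4, 1.2e-4]))  # ~12.4 pM for the undiluted library
```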

--
Phillip
Old 06-20-2013, 06:40 AM   #62
DaanV
Junior Member
 
Location: The Netherlands

Join Date: Dec 2012
Posts: 8
Default

Quote:
Originally Posted by pmiguel
Or maybe what we want is the Ct and the first derivative of the Ct. That would be the slope at that point. If the slope at the Ct doesn't match that of the standards (nearest standard?), hopefully one could do a correction based on that?
That's an interesting thought. I've delved into it but couldn't quite figure it out. Perhaps you can help out where I get stuck?

Here's what I've got:

Q(C) = SQ * (E+1)^C

Upon crossing the threshold:
Q(Ct) = Qt = SQ * (E+1)^Ct

With:
Qt = The threshold value of the DNA quantity/RFU
SQ = Starting quantity of DNA
E = Reaction efficiency
Ct = The number of cycles after which the threshold is reached


The derivative of this function, evaluated at the threshold, is:

Qt' = SQ * (E+1)^Ct * ln(E+1) = Qt * ln(E+1)

since for y(x) = m * a^x the derivative is y'(x) = m * a^x * ln(a).


Which gives us two equations with three unknowns (SQ, E and Qt' are unknown). Qt should in theory be known (it's the threshold value), but I have yet to find it in the software.

So I figured that perhaps I could get an approximation of Qt' from the slope of the function at that point. The problem I run into though is that we don't really get a lot of measurement points (no more than exactly 1 per cycle). Also I couldn't find the value for Qt (the height of the threshold), but that's bound to be saved somewhere.

So yeah, we could estimate the slope at (Ct, Qt) by dividing the RFU of the measurement point after Ct by that of the point before it. In general:

Q(C+1)/Q(C) = (SQ * (E+1)^(C+1)) / (SQ * (E+1)^C) = (SQ / SQ) * (E+1)^(C+1-C) = (E+1)
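That per-cycle ratio is easy to pull out of exported amplification data; a minimal sketch, with made-up RFU values and assuming background-subtracted signal:

```python
import math

def efficiency_at_ct(rfu, ct):
    """Estimate E from the RFU ratio of the cycles flanking Ct.

    rfu: background-subtracted RFU values, one per cycle (index 0 = cycle 1).
    ct:  fractional threshold cycle reported by the instrument.
    Returns E such that the per-cycle growth factor is E + 1.
    """
    below = int(math.floor(ct)) - 1          # index of last cycle before Ct
    ratio = rfu[below + 1] / rfu[below]      # growth over the crossing cycle
    return ratio - 1.0

# Ideal doubling: each cycle is exactly 2x the previous, so E = 1.0
ideal = [2.0 ** c for c in range(1, 31)]
print(efficiency_at_ct(ideal, 15.3))  # -> 1.0
```

In real data the ratio shrinks as cycle number rises, which is exactly the underestimation described below.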

Of course, this is only true for a perfectly exponential relation, which we know is not the case for us. We assume that our relation is closest to exponential around the threshold, so I went ahead and calculated this ratio for 6 samples, and it consistently underestimated the efficiency (assuming that my previously described method is more accurate than this estimate).

So the approach via the derivative might actually work if we had more measurement points, but we don't. We could interpolate between the two values, but to do that accurately we would need an efficiency value in the first place.

In other words, I'm a bit stuck. I'd like to see someone else's approach to a solution.


Quote:
Originally Posted by Isequencestuff
What are your thoughts?
Hard to say based on the results alone. I tend to go with 30 uL reactions (2 uL sample/library, 8 uL dilution buffer and 20 uL Bio-Rad iQ SYBR green mix). I did test smaller reaction volumes (down to 15 uL total), but I found my results were less consistent that way. On the other hand, one would expect that to make the results within each plate less consistent, not to make the results of a whole plate come out much higher than expected.
Old 06-20-2013, 07:34 AM   #63
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

What is the ROX for in a SYBR green qPCR reaction?

--
Phillip
Old 06-20-2013, 07:55 AM   #64
TonyBrooks
Senior Member
 
Location: London

Join Date: Jun 2009
Posts: 298
Default

Quote:
Originally Posted by pmiguel View Post
What is the ROX for in a SYBR green qPCR reaction?

--
Phillip
AFAIK, ROX is used as a passive reference. It takes no part in the PCR, but is used to normalise wells (based on the volume of master mix dispensed). I guess SYBR has some background fluorescence that needs to be accounted for.
This is what it's used for in end-point assays (TaqMan SNP assays); you can really see the difference if you don't use it.
Old 06-20-2013, 07:58 AM   #65
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

To restate what has mainly already been said:

From first principles, SYBR green qPCR on a StepOne relates:

the cycle at which the change in fluorescence signal reaches some threshold value (the "Ct" or "Cq")

with

an initial concentration of amplicon.

To state the obvious, the lower the initial concentration, the more cycles it takes to reach that threshold change in fluorescence.

But any factor hindering the amplification results in a less-than-2x increase in the amount of double-stranded DNA per cycle. The percentage of that theoretical maximum increase is the "efficiency". If the efficiency of amplification up to the Ct differs between samples and standards, then the calculated initial concentration will not be accurate.

DaanV actively tests the efficiency of amplification for each sample and can correct for differences in those efficiencies. However, this requires additional qPCR reactions that I would prefer to avoid.

My hope would be that the efficiency of an amplification could be measured from a single reaction. On the surface this seems reasonable: just check how closely the increase in fluorescence per cycle matches 2x. However, when I do that, I find that this delta changes every cycle. It starts close to 100% (or even higher) and drops each cycle.

Anyway, now I am beginning to question the Rn and deltaRn data in the "amplification data" exported from the StepOne software. So, I am taking a look at the "raw data" it is based upon.
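For reference, the standard-curve arithmetic that these Ct-based quantifications rest on can be sketched as follows; the standard concentrations and Ct values here are invented for illustration:

```python
import numpy as np

# Hypothetical dilution series of a standard (pM) and observed Ct values.
std_conc = np.array([20.0, 2.0, 0.2, 0.02])
std_ct   = np.array([10.1, 13.5, 16.9, 20.3])

# Fit Ct = slope * log10(conc) + intercept.
slope, intercept = np.polyfit(np.log10(std_conc), std_ct, 1)

# Perfect doubling per cycle gives slope = -1/log10(2) ~ -3.32;
# a shallower slope means efficiency E below 100%.
efficiency = 10.0 ** (-1.0 / slope) - 1.0

def quantify(ct):
    """Interpolate an unknown's concentration (pM) off the standard curve."""
    return 10.0 ** ((ct - intercept) / slope)

print(round(efficiency, 3), round(quantify(15.0), 3))  # -> 0.968 0.724
```

Note that the fit assumes one efficiency shared by standards and samples, which is exactly the assumption being questioned in this thread.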

--
Phillip
Old 06-20-2013, 08:04 AM   #66
Isequencestuff
Member
 
Location: Cambridge

Join Date: Nov 2012
Posts: 21
Default qPCR Quantification

Quote:
Originally Posted by pmiguel View Post
I can't figure out what the different bars in your bar chart denote. The y-axis is concentration in pM, but each chart has 8 or 4 bars at different levels. What are those? Also, two of your charts have bars in two different colors.

As for some of your final libraries having much higher concentrations than you expected (way above 10 nM): Illumina kits are designed for overkill. They have to be, so that samples of lower quality or quantity will still produce something you can use. Typically we take a look at the preamplification libraries on the Bioanalyzer and usually end up doing only 4 cycles instead of 10.

--
Phillip
The different bars denote individual libraries, and the colors denote different library preparations. I would disregard the colors and the different bars, as these libraries were also prepped with different kits.

At 1 ug input we get plenty of yield, so amplification isn't necessary; accurate quantification is still a problem, though. Qubit combined with Bioanalyzer data indicates that we have high-quality fragments and that our library DNA is present at a high concentration. I'm thinking our errors reside in the qPCR.

I'm concerned by the abnormally high pM values I get when prepping from 1 ug of starting gDNA. I've attached an example of another qPCR run that I repeated. The two runs give an extreme difference in concentration, yet they are the exact same dilution. Repeatability is a problem we're having, and in library prep it shouldn't be.

If I were to compare one qPCR plate's data to another qPCR plate's data, they should be relatively close, correct?
Attached Images
File Type: jpg qPCR trial 1.JPG (19.3 KB, 14 views)
File Type: jpg qPCR trial 2.JPG (12.9 KB, 9 views)

Last edited by Isequencestuff; 06-20-2013 at 08:09 AM.
Old 06-20-2013, 08:08 AM   #67
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Quote:
Originally Posted by DaanV View Post
That's an interesting thought. I've delved into it but couldn't quite figure it out. Perhaps you can help out where I get stuck?

[...]

So the approach via the derivative might actually work if we had more measurement points, but we don't. We could interpolate between the two values, but to do that accurately we would need an efficiency value in the first place.

In other words, I'm a bit stuck. I'd like to see someone else's approach to a solution.
Yes I was just thinking of doing an interpolation between the two points surrounding the Ct.

But in thinking about it, there are problems. Specifically, the cycle-to-cycle increase in signal seems to diminish as cycle number goes up. In essence, all data past the Ct is discarded anyway, so only the amplification efficiency up to the Ct would affect the Ct. But as the efficiency of amplification appears to change as cycle number increases, I am having doubts about the utility of an "estimated slope at the Ct" metric.

Also, I don't know what the software is doing with the ROX signal, but it is apparently making corrections. Whether they are well-to-well, cycle-to-cycle or both, I am not clear. Probably I should read the reference you linked to earlier.

--
Phillip
Old 06-20-2013, 08:08 AM   #68
Isequencestuff
Member
 
Location: Cambridge

Join Date: Nov 2012
Posts: 21
Default

Quote:
Originally Posted by DaanV View Post
Hard to say based on the results alone. I tend to go with 30 uL reactions (2 uL sample/library, 8 uL dilution buffer and 20 uL Bio-Rad iQ SYBR green mix). I did test smaller reaction volumes (down to 15 uL total), but I found my results were less consistent that way. On the other hand, one would expect that to make the results within each plate less consistent, not to make the results of a whole plate come out much higher than expected.
Good point. I think trying larger volumes is worth investigating.

Last edited by Isequencestuff; 06-20-2013 at 08:10 AM.
Old 06-20-2013, 08:12 AM   #69
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Quote:
Originally Posted by Isequencestuff View Post
The different bars denote individual libraries, and the colors denote different library preparations. I would disregard the colors and the different bars, as these libraries were also prepped with different kits.

At 1 ug input we get plenty of yield, so amplification isn't necessary; accurate quantification is still a problem, though. Qubit combined with Bioanalyzer data indicates that we have high-quality fragments and that our library DNA is present at a high concentration. I'm thinking our errors reside in the qPCR.

I'm concerned by the abnormally high pM values I get when prepping from 1 ug of starting gDNA. I've attached an example of another qPCR run that I repeated. The two runs give an extreme difference in concentration, yet they are the exact same dilution. Repeatability is a problem we're having, and in library prep it shouldn't be.

If I were to compare one qPCR plate's data to another qPCR plate's data, they should be relatively close, correct?
How are you quantitating the input DNA for library construction? Unless you are using fluorimetry with a double-strand specific fluor your quantitation will likely be inaccurate.

Just to be clear -- are you getting different results from qPCR quantitating the same library? That is what tells you if your qPCR is the source of the variance you see.

--
Phillip
Old 06-20-2013, 08:29 AM   #70
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Quote:
Originally Posted by Isequencestuff View Post
I'm concerned by the abnormally high pM values I get when prepping from 1 ug of starting gDNA. I've attached an example of another qPCR run that I repeated. The two runs give an extreme difference in concentration, yet they are the exact same dilution. Repeatability is a problem we're having, and in library prep it shouldn't be.

If I were to compare one qPCR plate's data to another qPCR plate's data, they should be relatively close, correct?
Okay, about libraries. Theoretically, 1 ug of DNA is about 2 trillion 500 bp fragments. 1 nM is roughly 0.6 billion molecules per uL. Presuming you end up with 40 uL of library at 10 nM, that gives you a total of about 240 billion molecules (amplicons), roughly a 13% yield from your initial DNA amount. So, in principle, 40 nM is doable.
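That arithmetic is easy to check explicitly; a quick sketch, using the usual ~650 g/mol per base pair of dsDNA:

```python
AVOGADRO = 6.022e23
BP_MASS = 650.0          # approx. g/mol per base pair of dsDNA

# 1 ug of 500 bp fragments:
moles = 1e-6 / (500 * BP_MASS)
fragments = moles * AVOGADRO
print(f"{fragments:.2e} fragments")      # ~1.9e12, i.e. about 2 trillion

# Molecules per uL at 1 nM:
per_ul = 1e-9 * AVOGADRO / 1e6
print(f"{per_ul:.2e} molecules/uL")      # ~6.0e8, i.e. about 0.6 billion

# 40 uL of a 10 nM library:
total = 40 * 10 * per_ul
print(f"{total:.2e} molecules")          # ~2.4e11
print(f"yield = {total / fragments:.1%}")
```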

Getting a 40 nM library without amplification from 1 ug of DNA makes you a star in my book. But that is just an indication of how wasteful library construction protocols are; wasteful of input DNA, that is. Most are optimized to be quick.

Then the thing is: yes, even after you eliminate variability in library construction, I still expect enormous variability in the incoming DNA. How oxidized and abasic is it, for example? Does it happen to have epigenetic modifications that choke some downstream enzyme? I am sure you can think of other factors as well; factors that are essentially invisible to us.

--
Phillip
Old 06-21-2013, 02:00 AM   #71
ETHANol
Senior Member
 
Location: Western Australia

Join Date: Feb 2010
Posts: 308
Default

Just a word from a current observation: if I measure the same libraries, which are only about 15 nM, on different days with the Qubit HS DNA kit, the concentrations vary by up to 50%. The relative concentrations keep more or less the same ratio to each other, but the absolute quantities differ from the previous reading. Since the relative ratios between samples are more or less the same, that rules out a pipetting problem. So that leaves the standard curve, which I believe is problematic, at least at the lower ranges. That makes sense, as the standards are only zero DNA and the maximum quantity; the kit really should include some more standards. So I'm guessing it's best to make separate standards for each replicate, which I will do for my next run.

Annoyingly, my first HiSeq run was overclustered and now my current one is on the low side. I'm hoping the next one will be just right.

Any opinions on Qubit HS DNA kit vs. PicoGreen?
__________________
--------------
Ethan
Old 06-21-2013, 06:54 AM   #72
DaanV
Junior Member
 
Location: The Netherlands

Join Date: Dec 2012
Posts: 8
Default

I have just returned from a European roadshow by Bio-Rad, who have developed a technique (new to me, at the very least) called Droplet Digital PCR (ddPCR).

I'm sure other people will be better able to explain the specifics and details, but it did seem to be quite interesting in that it's independent of the reaction efficiency. http://www.bio-rad.com/prd/en/US/LSR...tal-PCR-System --> Scroll down in the 'Overview' menu to the 'ddPCR Technology and Workflow' to see a basic explanation of the technique.

A very basic explanation:
ddPCR differs from qPCR in that qPCR gives relative quantitation (you need a calibration curve in each new experiment), while ddPCR is all about absolute quantitation.

A 20 uL sample is divided into 20k droplets (1 nL/droplet), separated by oil. The resulting emulsion is subjected to 40 cycles of regular PCR (no real-time measurements). Instead of measuring efficiency-dependent quantities like Ct, only end-point measurements are made. In other words, the whole thing reduces to a yes/no analysis: either the target sequence is in the droplet, or it is not.

This yes/no analysis is then repeated over the 20k droplets. So if there are only 100 molecules of a certain fragment in a sample, you would expect close to 100 droplets to give a positive signal (it's impossible to distinguish droplets holding a single copy from those holding two, as only the end point is seen).

When there are 10k target fragments in the 20k droplets (an average of 0.5 copies per droplet), a fair share of the molecules end up sharing a droplet; the Poisson distribution predicts that 1 - e^-0.5, or about 39%, of the droplets will be positive. So you would expect roughly 7.9k positive signals rather than 10k.
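The Poisson correction that recovers the copy count from the positive-droplet fraction can be sketched as follows (droplet volume assumed to be 1 nL, as above):

```python
import math

def copies_from_droplets(n_droplets, n_positive, droplet_nl=1.0):
    """Poisson-corrected copies per uL from a ddPCR droplet count.

    A droplet is negative with probability e^(-lambda), where lambda is the
    mean copies per droplet, so lambda = -ln(n_negative / n_droplets).
    """
    lam = -math.log((n_droplets - n_positive) / n_droplets)
    copies_total = lam * n_droplets
    volume_ul = n_droplets * droplet_nl * 1e-3   # total partitioned volume
    return copies_total / volume_ul

# 10k molecules in 20k droplets -> lambda = 0.5 -> ~39% positive droplets:
n_pos = round(20_000 * (1 - math.exp(-0.5)))
print(copies_from_droplets(20_000, n_pos))   # ~500 copies/uL (10k in 20 uL)
```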

Anyway, long story short: ddPCR is very nice for some nifty applications (copy number variation was a centerpiece of the presentation, as was detection of very rare events). The added benefits for NGS purposes seemed somewhat limited to my mind. Yes, ddPCR is a lot more accurate at determining very small differences in molarity, and it's also more accurate for very small amounts of the target sequence in general.

Taking into account the extra costs involved in adopting a new system, though, I don't think this will make its way into the standard equipment of NGS laboratories. But I may be overlooking something, of course.
Old 06-21-2013, 11:39 AM   #73
Number6
Member
 
Location: NY

Join Date: Feb 2009
Posts: 20
Default

Hi DaanV... you have come to the same conclusion many of us have. Digital PCR should do a much better job of quantifying libraries. In fact, I've just ordered an instrument in the hope that it will. I am convinced that, if the price point is low enough, a digital PCR instrument will become as common in NGS laboratories as the Qubit is now.
Old 06-21-2013, 02:25 PM   #74
austinso
Member
 
Location: Bay area

Join Date: Jun 2012
Posts: 77
Default

Using ddPCR for library quant, I would say the cost is actually comparable to BioA + KAPA analysis. It is really about the initial overhead cost, and it is not a one-trick pony (i.e. it is also good for validating sequencing results).

The key advantage is that you are measuring molecules/uL, which converts directly to molarity. There are no fudge factors based on relative lengths, etc. In fact, you can model cluster density from the concentration called by ddPCR extremely well.

The general caveat with library quant lies in the accuracy of the dilution series and the number of dilutions made prior to the measurement (with any instrument or method, I might add). As long as one is consistent in technique it is less of an issue, and a fudge factor can be applied to hit the right range.

For qPCR, there are excellent programs for analyzing individual PCR curves to infer starting quantities. However, there is typically only a small window in which PCR amplification is log-linear, and that window tends to vary. The biggest problem is extrapolation from log space back to linear space, which can cause huge variability in concentration calls. Ct is way too arbitrary, IMHO.

FWIW
Old 06-23-2013, 11:37 PM   #75
DaanV
Junior Member
 
Location: The Netherlands

Join Date: Dec 2012
Posts: 8
Default

Quote:
Originally Posted by austinso View Post
Using ddPCR for library quant, I would say that the cost is actually comparable with BioA+KAPA analysis. It is really about the initial overhead cost, but it is not a one trick pony (i.e. also good for validating sequencing results).
Really? That's interesting news indeed. I had figured that, with all the extra chemistry needed to keep the droplets from merging and the need to keep the droplets at exactly 1 nL, the costs would be significantly higher than what we already have. I suppose, though, that eliminating the very costly standard series is a major advantage of ddPCR.

I agree on the additional advantages of ddPCR. It's very useful for some applications.

Quote:
Originally Posted by austinso View Post
In fact, you can model cluster density with the concentration called in ddPCR extremely well.
Could you possibly provide us with some data showing this? The data Bio-Rad showed was good, but not stunning in my opinion. They showed us a graph of cluster density PF for samples quantitated by both qPCR and ddPCR. Sure, the error bar for ddPCR was significantly smaller, but it was still very much present. How would you define "extremely well"?

Quote:
Originally Posted by austinso View Post
For qPCR, there are excellent programs for analyzing individual PCR curves to infer starting quantities. However, there is typically only a small window in which PCR amplification is log-linear, and that tends to vary. The biggest problem is extrapolation from log space to linear space, which can cause a huge variability in concentration calls. Ct is way too arbitrary IMHO.
Would you mind pointing me to such a program? I would be very much interested in comparing it with my own method. Thanks in advance for any help provided.
Old 06-24-2013, 09:21 AM   #76
austinso
Member
 
Location: Bay area

Join Date: Jun 2012
Posts: 77
Default

Quote:
Originally Posted by DaanV View Post
Really? That's interesting news indeed. I had figured that with all the extra chemicals to keep the droplets from merging and the necessity to keep the droplets at exactly 1nL, the costs would be significantly higher than what we already have. I suppose though that ruling out the very costly standard series is a major advantage to ddPCR.

I agree on the additional advantages of ddPCR. It's very useful for some applications.
They didn't do a good job explaining, did they? The cost is about US$5 per well. For library quant you really need 4 dilutions to measure, where the last two are likely to be within the dynamic range of the instrument (<100,000 copies/uL, or <0.16 pM).

Quote:
Could you possibly provide us with some data indicating this? The data that Bio-Rad showed was good, but not stunning in my opinion. They showed us a graph with Cluster Density PF for samples quantitated by both qPCR and ddPCR. Sure, the error bar for ddPCR was significantly smaller, but it was still very much present. What would you define 'extremely well'?
I'd like to, but I don't know if I can. Basically, you can plot cluster density against input concentration determined by ddPCR and get a nice curve. Between 5 pM and 10 pM you will essentially get the same number of Q30 reads on a MiSeq, though the %PF may decrease. That may have changed with the V2 chemistry, though. I'm assuming the same is true for the HiSeq.

The error is due to dilution error and to extrapolating back to the starting concentration. You can get a sense of this by pipetting water onto an analytical balance and measuring the average mass dispensed, which gives a sense of your pipetting accuracy and, more importantly, bias. That still doesn't account for the wetting properties of the tip plastic with a DNA solution.
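How per-step pipetting error compounds over a serial dilution can be sketched as follows; the 2% per-step CV is an assumed figure, not a measured one:

```python
import math

def serial_dilution_cv(step_cv, n_steps):
    """Approximate relative error after n independent dilution steps.

    Each step multiplies the concentration by a factor with relative
    standard deviation step_cv; independent errors add in quadrature,
    so the overall CV grows with the square root of the step count.
    """
    return math.sqrt(n_steps) * step_cv

# Four serial dilutions at an assumed 2% CV per step:
print(f"{serial_dilution_cv(0.02, 4):.1%}")  # -> 4.0%
```

This is a simplification (each step really involves two pipetting operations, sample and diluent), but it shows why extrapolating through many dilutions inflates the error on the back-calculated stock concentration.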

Quote:
Would you mind pointing me to such a program? I would be very much interested in comparing it with my own method. Thanks in advance for any help provided.
Sure. In fact the one I use(d) the most was LinRegPCR, which is from a group in the Netherlands.

A good resource for qPCR (though hard to navigate) that will link to various methods of analysis is here

Best

Austin