SEQanswers

Old 06-14-2013, 12:04 PM   #41
1230Rock
Junior Member
 
Location: nyc

Join Date: Oct 2011
Posts: 3
Default

Hello,

I have been quantifying my libraries with the KAPA system and find that I have to dilute my libraries at least 1:16,000 to get into the reliable range of the standard curve. Also, my calculated molarity is 3-5 times higher than what I get with a Nanodrop. Any insight into what might be going on would be greatly appreciated!

Thanks in advance.
Old 06-14-2013, 12:30 PM   #42
GW_OK
Senior Member
 
Location: Oklahoma

Join Date: Sep 2009
Posts: 411
Default

I serially dilute my libraries to 1:10,000, 1:100,000, and 1:1,000,000 to fall within the standard curve (and run all three). Not a big deal. And I never trust anything a Nanodrop says.
Old 06-14-2013, 03:18 PM   #43
DNA_Dan
Member
 
Location: Montana

Join Date: Nov 2008
Posts: 21
Default

I'm with GW_OK. A Nanodrop really has no business in NGS as far as I'm concerned. It measures everything, whereas the KAPA kit measures only "functional" molecules that will actually contribute to amplification. That is why you see a difference. Just imagine: there is unligated product, single-end-ligated product, etc., which all contribute to A260 but will not actually amplify in the KAPA assay (or on the flow cell).

I typically use a 1:50,000 dilution for my libraries. I used to run several dilutions, but have seen up to a 20% difference in concentration due to the added dilution steps. The Ct measurement is more accurate the further out you dilute, but reproducing that exact dilution is questionable and carries a higher deviation. I have found it best to focus on the reproducibility of technique: using fixed-volume pipettors, watching whether I blow out on the pipettes, etc. After all, the hope is that what you do at the bench and calculate from the KAPA kit is reproducible when you go back to your stock library tube. Doing it EXACTLY the same way every time is the key to consistency.
Old 06-16-2013, 01:21 PM   #44
protist
Senior Member
 
Location: Ireland

Join Date: Jan 2009
Posts: 101
Default

I echo GW_OK and DNA_Dan with regard to the Nanodrop - it should not be relied on for library quantification at any stage. My advice: if you want to quantify outside of your KAPA protocol, use a Qubit; it is more sensitive and consistent than the Nanodrop.
Old 06-17-2013, 12:41 AM   #45
DaanV
Junior Member
 
Location: The Netherlands

Join Date: Dec 2012
Posts: 8
Default

Quote:
Originally Posted by DNA_Dan View Post
I used to run different dilutions, but have seen up to 20% difference in concentration due to the added dilution steps. The measurement is more accurate on Ct the further out you go, but reproducing that exact same dilution is questionable and has a higher deviation.
Just figured I would hop in here to make my first-ever post on this forum after a long time of lurking and learning (thanks for that, all of you!).

I have found that indeed, you calculate different starting quantities (SQ) for different dilutions of the same sample. However, keep in mind that SQ is calculated based on the PCR efficiency of the standard, and not all libraries run through the reaction at the same efficiency.

Now one might argue that the differences in efficiency between the standard and the sample are relatively small (usually <5 percentage points) and can thus be neglected. I would like to stress, though, that the difference between 95% efficiency and 98% efficiency is not negligible after 20 cycles of qPCR (a typical Cq for a heavily diluted sample) due to the exponential nature of PCR: ((0.95+1)/(0.98+1))^20 = 0.74. I.e., if your standard has 98% efficiency and the sample has 95% efficiency, your calculated SQ value can be off by about 25%!
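In code, that compounding works out as follows (a minimal Python sketch, using just the numbers from this example):

Code:
# How a small efficiency mismatch compounds over 20 cycles of qPCR.
e_standard, e_library, cycles = 0.98, 0.95, 20

# Both reactions must reach the same fluorescence threshold, so the
# calculated SQ is biased by the compounded ratio of the per-cycle
# amplification factors:
bias = ((1 + e_library) / (1 + e_standard)) ** cycles
print(f"calculated SQ is {bias:.2f}x the true value")  # ~0.74, i.e. ~26% off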

For this reason, I always run duplicates of three dilutions (1,000x, 16,000x and 256,000x) for every sample, calculate the efficiency of each sample, and then use this efficiency to make a more accurate estimate of the SQ. Using this method, the results for each subsequent dilution are much more consistent than when simply using the SQ values as calculated by the qPCR software (the standard deviation of the 6 values is almost always <10% of SQ with my method).

Of course this does mean that I'm limited to 13-14 samples per plate of qPCR, but I find that this is only a minor investment compared to the improved accuracy.

I hope this helps some people out.

Last edited by DaanV; 06-17-2013 at 06:35 AM.
Old 06-17-2013, 08:09 AM   #46
DNA_Dan
Member
 
Location: Montana

Join Date: Nov 2008
Posts: 21
Default

Does your PCR reaction efficiency change with a more dilute sample?

I've always looked at it as being relative to the standard, as long as the slope is always the same and the standards come off at roughly the same Ct values every time. Does it matter if the reaction is performing at, say, 50% efficiency, if it amplifies at the 1E6 standard?

The same is true regarding the Qubit. PicoGreen works relatively well, but you really have to watch that the standards don't deviate.
Old 06-17-2013, 07:22 PM   #47
Genquest
Member
 
Location: Indo

Join Date: Oct 2011
Posts: 20
Default

I too agree with all that the Nanodrop is not a reliable method of quantitation. We too rely on PicoGreen and qPCR. We use qPCR mainly to confirm that we have well-constructed libraries to start with, and to see the trend of the concentration in comparison to the PicoGreen concentration.
And yes, we do get concentrations from qPCR that vary 2-5x from the PicoGreen concentration. We still stick with the PicoGreen concentration, adjusting a little with the qPCR concentration. I am not sure if I'm clear here... for example, if my PicoGreen conc. is 2 nM and the qPCR conc. is 2.8-3.0 nM, then I assume my library conc. to be somewhere around 2.5 nM, and use that for clustering.
As to why qPCR overestimates your libraries, I believe it's because each sample has its own amplification efficiency, which cannot be absolutely correlated with the standards being used.
And regarding the dilution factor, yes, that also imposes variation in quantification. I have been using PhiX and KAPA standards for quantification. PhiX, even at high accuracy, only shows 85-90% efficiency, whereas the KAPA standards give you more than 95% efficiency.
I too have diluted my libraries 1:50,000 or even up to 1:100,000, which has given consistent concentrations resulting in the desired cluster numbers. Hope this helps.
Old 06-18-2013, 12:38 AM   #48
DaanV
Junior Member
 
Location: The Netherlands

Join Date: Dec 2012
Posts: 8
Default

Quote:
Originally Posted by DNA_Dan View Post
Does your PCR reaction efficiency change with a more dilute sample?
The efficiency should not change for more diluted samples, no. If the efficiency were not constant, this would easily be detected as a non-linear graph of Cq vs log(SQ).

I use the serial dilutions to calculate the efficiency of my samples. After all, each 2x dilution should result in a Cq exactly 1 cycle higher if the reaction is at 100% efficiency (since the amount of DNA doubles each cycle). If it is slower than that, the shift can be used to calculate the efficiency of that sample.
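As a small sketch of that calculation (Python; the ΔCq values are invented examples), for a dilution factor d between two points of the series, (1 + E)^ΔCq = d, so E = d^(1/ΔCq) - 1:

Code:
# Per-sample PCR efficiency from the Cq shift across a serial dilution:
# (1 + E)^delta_cq = dilution_factor, hence E = d**(1/delta_cq) - 1.
def efficiency(delta_cq, dilution_factor):
    return dilution_factor ** (1.0 / delta_cq) - 1.0

print(efficiency(1.0, 2))    # 2x dilution, Cq shifts by exactly 1 -> E = 1.0 (100%)
print(efficiency(4.09, 16))  # 16x dilution, Cq shifts by 4.09     -> E ~ 0.97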

Quote:
Originally Posted by DNA_Dan View Post
I've always looked at it as being relative to the standard. As long as the slope is always the same and the standards come off at relatively the same Ct values every time. Does it matter if the reaction is performing at say 50% efficiency if it amplifies at the 1E6 standard?
It's not directly the efficiency of the standard series that matters, it's the difference between the efficiency of the standard series and that of your sample. So no, it doesn't matter if the standard series is performing at 50% efficiency, as long as the samples come in with the exact same efficiency.

That's the crux of the qPCR software: it assumes that the slope/efficiency of the standard series is the same as that of your sample. I find that this is usually not the case, which might explain your earlier observation that the SQ values you obtain for serial dilutions don't always match up.

I've actually worked this out in reasonable detail for my internship project. I find that my results have become much more consistent since I implemented this new method of calculation. If anyone has any questions regarding this, feel free to send a PM or state it here.

Last edited by DaanV; 06-18-2013 at 05:58 AM. Reason: Extra clarification
Old 06-18-2013, 08:26 AM   #49
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Quote:
Originally Posted by DaanV View Post
I've actually worked this out in reasonable detail for my internship project. I find that my results have become much more consistent since I implemented this new method of calculation. If anyone has any questions regarding this, feel free to send a PM or state it here.
Sounds useful. Yes, please do post it.

--
Phillip
Old 06-18-2013, 09:03 AM   #50
DNA_Dan
Member
 
Location: Montana

Join Date: Nov 2008
Posts: 21
Default

I'm not that qPCR savvy, but I get what you're saying. I'd definitely like to take a look at what you have. Anything that makes this more consistent is a win for everyone on the forums.
Old 06-18-2013, 09:17 AM   #51
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

I have noticed qPCR results that seemed to suggest that the slope of the standards was not the same as the slope of a given library. Since one of the axes of this graph is dilution, maybe reaction efficiency is the key.

I am definitely not a qPCR guy either. But I get the feeling this is something that would be blindingly obvious to a qPCR guy.

--
Phillip

Last edited by pmiguel; 06-18-2013 at 10:56 AM.
Old 06-19-2013, 01:10 AM   #52
DaanV
Junior Member
 
Location: The Netherlands

Join Date: Dec 2012
Posts: 8
Default

Right... hold on tight then, as this may take quite a bit of explaining; please bear with me. If you care only about the results, feel free to skip to the bottom of this post (down to where it says "SUMMARY" in big blocky letters).

So let's start with a description of a general PCR reaction:

Q = SQ * (E+1)^C

With:
Q = the DNA quantity in the sample after C cycles
SQ = starting quantity (equal to Q at C=0)
C = number of cycles of PCR
E = PCR efficiency of that library (depending on GC content, fragment length distribution, and possibly other variables)


In the case of perfect replication (E=1, so that Q = SQ * 2^C), the amount of DNA is exactly doubled after every cycle of replication.

Now, as you may know for qPCR: the Cq values correspond to the number of cycles after which a certain level of fluorescence is measured (which stands in direct relation to the amount of double-stranded DNA). So, when this is the case for any sample, we can rewrite the above equation as:

Qq = SQ * (E+1)^Cq

With:
Qq = quantification quantity, or the pre-determined level of fluorescence that a sample must reach.
Cq = quantification cycle, or the cycle at which Qq is reached.


It should be noted that Qq is the same for every sample (Qq1=Qq2).

Now, let's take 1 dilution series of a sample with a known molarity (the standard). The efficiency of your standard is given by the slope of the graph of Cq vs log(SQ), and can be calculated as:

E = 10^(-1/m) - 1

With:
m = the slope of Cq vs log(SQ)


(Normally the efficiency of your standard is given by the software you use; you can use this to check whether you're calculating it correctly. A mathematical derivation of this relation can be found further down.)

So yeah Phillip, you're right in assuming that the inequality of slopes of standards and libraries is caused directly by efficiency.

Additionally, the y-intercept of the graph of Cq vs log(SQ) of your standard series (the value of Cq when log(SQ) = 0) will be called UC, for Unit Cycle, for now (when log(SQ) = 0, SQ = 1, hence the name).

Now let's evaluate a library with unknown SQ. We'll call the standard sample 1, and the library sample 2.

Qq1 = SQ1 * (E1 + 1)^Cq1
Qq2 = SQ2 * (E2 + 1)^Cq2

(Qq1/Qq2) = (SQ1/SQ2) * ((E1 + 1)^Cq1 / (E2 + 1)^Cq2)

Now let's substitute SQ1 with 1, so that Cq1 = UC
Also note that Qq1=Qq2

1 = (1/SQ2) * ((E1 + 1)^UC / (E2 + 1)^Cq2)

Bringing SQ2 to the left hand side results in:

SQ2 = ((E1 + 1)^UC / (E2 + 1)^Cq2)

This then gives us an accurate relation with which to calculate SQ2. Note that E1 and UC are given by the standard series, while E2 is given by the library. Calculating SQ2 for the 6 different values of Cq2 (resulting from 3 dilutions in duplicate) and taking into account the different dilutions should result in 6 near-equal values for SQ2. This is the equation I use.

Relation to software
This part will detail the difference between my method and the method commonly applied in qPCR software.

I have found that the software uses a simplifying assumption. The assumption is made that the slope of the standard series is a good approximation of the slopes of all libraries. In other words, they assume that the standards and libraries run with near-equal efficiencies.

Using that assumption, the above relation can be re-written as follows:

SQ2 = (E1 + 1)^(UC - Cq2)

Since a^b/a^c = a^(b-c)

Using this equation, I get exactly the same SQ values as the software does. It should be clear that this assumption goes awry when E1 does not equal E2. As I demonstrated in my first post, even seemingly minor differences in efficiency can lead to huge differences in SQ due to the exponential nature of the process.

Additional information
This is really a part that you don't need to read in order to understand the above. I just thought I'd share it in case anyone is interested.

The above relation can be rewritten generally as:

log_(E+1)(SQ) = UC - Cq

With:
log_(E+1)(SQ) = the logarithm of SQ with base (E + 1)

Since:
a = b^c --> log_b(a) = c


Leading to:

Cq = - log(SQ)/log(E + 1) + UC

Since:
log_b(a) = log(a)/log(b)
(all plain logs from here on are base 10)


If we define:

m = - 1 / log(E + 1)

we can clearly see that this is a constant (assuming that E is constant).
This also gives us a relation to solve E from the slope:

E = 10^(-1/m) - 1

as noted earlier.

Substituting m into the previous equation leads to:

Cq = m * log(SQ) + UC

which makes it immediately obvious that the graph of Cq versus log(SQ) is linear, with slope m and y-intercept UC.

SUMMARY
So here's a basic step-by-step of what I do:

1) Run each qPCR plate with a standard dilution series, and run libraries at 1,000x, 16,000x and 256,000x dilutions in duplicate.

2) Calculate the slope (m) and y-intercept (UC) of Cq vs log(SQ) of the standard dilution series. The Excel LINEST function is very useful here.

3) Calculate the slope m of Cq vs -log(dilution) of the libraries in a similar manner (note that -log(dilution) differs from log(SQ) only by a constant, so the slope is the same).

4) Calculate E for the standard and all libraries as: E = 10^(-1/m) - 1

5) Calculate SQ for each dilution of a library as: SQ = (Es + 1)^UC / (El + 1)^Cq, with Es the E of the standard and El the E of the library.
Multiply each SQ by its dilution factor to obtain the molarity of your sample, and average over all 6 values. (A worked sketch in code follows after this list.)

Optionally:
6) Calculate the relative standard deviation as a check to see how 'reliable' your values are.

Just for fun:
7) Also calculate the relative standard deviation of SQ over the various dilutions as calculated by the software and note the differences.
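To make steps 2-5 concrete, here is a minimal Python sketch (numpy's polyfit stands in for Excel's LINEST; every Cq value below is a made-up placeholder, not real data):

Code:
import numpy as np

# -- Standard dilution series: known log10(SQ) and measured Cq (placeholders).
log_sq_std = np.array([1.0, 0.0, -1.0, -2.0, -3.0, -4.0])
cq_std = np.array([6.1, 9.5, 12.9, 16.3, 19.7, 23.1])

m_std, uc = np.polyfit(log_sq_std, cq_std, 1)  # step 2: slope m, y-intercept UC
e_std = 10 ** (-1.0 / m_std) - 1               # step 4: E = 10^(-1/m) - 1

# -- One library: duplicates at 1,000x, 16,000x and 256,000x dilution.
dil = np.array([1e3, 1e3, 16e3, 16e3, 256e3, 256e3])
cq_lib = np.array([11.2, 11.3, 15.3, 15.4, 19.4, 19.5])

m_lib, _ = np.polyfit(-np.log10(dil), cq_lib, 1)  # step 3: -log(dil) for log(SQ)
e_lib = 10 ** (-1.0 / m_lib) - 1

# -- Step 5: SQ = (Es + 1)^UC / (El + 1)^Cq, scaled back by the dilution factor.
molarity = (e_std + 1) ** uc / (e_lib + 1) ** cq_lib * dil

print(f"E standard: {e_std:.1%}, E library: {e_lib:.1%}")
print(f"mean molarity: {molarity.mean():.3g} (units of the standard curve)")
print(f"relative SD: {molarity.std(ddof=1) / molarity.mean():.1%}")  # step 6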


-----------

OK, so this has become a bit long-winded. I just thought I'd give all the information in case anyone was interested.

I hope that the idea has come across though. Please don't be afraid to ask any questions.
Old 06-19-2013, 05:07 AM   #53
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Hi DaanV,
Thanks for the detailed explanation. I am still studying it. But one question does spring to mind: does determining efficiency require a dilution series? Is it not possible to measure the efficiency directly by measuring the increase in fluorescence each cycle of a single reaction? That is, if the signal is exactly doubling each cycle, then the efficiency is 100%.
Again, I am not a qPCR guy, so the above may be naive.

--
Phillip
Old 06-19-2013, 05:50 AM   #54
DaanV
Junior Member
 
Location: The Netherlands

Join Date: Dec 2012
Posts: 8
Default

Hey Phillip,
You're welcome. It's my pleasure to finally be able to contribute something to SEQanswers.

Your question is valid, and it is indeed one I did pursue at some stage during my internship. In theory you are of course entirely right. And indeed, with more effort it may even be possible to do it (though I've not put in the dedication to see how robust the method is).

In essence, the problem you run into is that the curves aren't log-linear over the full range of the process. At the early stages, I think this is caused by the lower detection limit of the camera; it shows up as RFU (Relative Fluorescence Units) values fluctuating around 0 (+/- 30 or so) for the first bunch of cycles (a few cycles for high-concentration samples, more for dilutions).

Then at the end of the process the curve flattens again. I suppose this is caused by the reaction running out of nucleotides/primers. Worth a test perhaps, seeing if adding more of either of them increases the maximum value obtained.

These two effects combined result in the characteristic "S"-shaped curves that you find with qPCR. Only the truly log-linear part in between (which typically lasts only 6-8 cycles) can be used to calculate the efficiency. The efficiency you find then depends on exactly which cycles you decide to include in or exclude from this 'log-linear phase', which to my taste becomes a bit too arbitrary and prone to user bias.

I hope this clarifies things. Of course you're free to pursue the idea, as I'd love to be proven wrong. A quick test on some of my own data indicates that the E value acquired this way is at least in the range where I expect it to be.
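A rough Python sketch of that kind of quick test (the baseline-subtracted RFU trace is invented, and choosing the window is exactly the arbitrary step described above):

Code:
import numpy as np

# Invented, baseline-subtracted RFU trace with the characteristic S-shape.
rfu = np.array([2.0, 3.0, 5.0, 11.0, 22.0, 43.0, 85.0,
                160.0, 280.0, 420.0, 520.0, 560.0])

window = slice(2, 8)  # hand-picked log-linear cycles: the subjective part

# Per-cycle amplification factor within the window; E = factor - 1.
factors = rfu[window][1:] / rfu[window][:-1]
print(f"per-cycle E in window: {np.round(factors - 1, 2)}")
print(f"mean E: {factors.mean() - 1:.2f}")  # ~1.0 would mean 100% efficiency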
Old 06-19-2013, 06:28 AM   #55
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Quote:
Originally Posted by DaanV View Post
Your question is valid, and it is indeed one I did pursue at some stage during my internship. [...]
Yes, this "S" shaped curve is very familiar to me from a variety of processes. We were even given names for parts of the curve: the initial flat part is called the "lag phase", then the middle log linear part is call the "log phase" and the final flat part, well for bacterial growth anyway is called "stationary phase".

We have a Lifetech (also know as "Applied Biosystems" and "Invitrogen") Step One qPCR machine. It seems to search for an early part of the "log phase" via some algorithm and calls this the "Ct" for "cycle threshold". This may just be another name for one of the parameters you describe above. Anyway, to the extent this is a reasonable prediction of the beginning of "log phase", the efficiency of the reaction calculation maybe correct.

The issue here is just the obvious one -- needing to triple the number of qPCR reactions would likely lead to a substantial increase in our costs, especially as this instrument has recently turned into quite a bottleneck at times.

Actually, we have some aberrant clustering results -- specifically intra-pool -- that we could examine to see whether the efficiency metric predicts the issues we see.

Again, thanks for your insights. This has been an issue for us for years now. Hopefully this will get us nearer to managing it.

--
Phillip
Old 06-19-2013, 07:44 AM   #56
DaanV
Junior Member
 
Location: The Netherlands

Join Date: Dec 2012
Posts: 8
Default

Yes, I'm familiar with the nomenclature of the S-shaped growth curves of microbes; I wasn't sure if I could apply the same names here. Log-linear phase seems like as decent a term as any.

Personally I use a Bio-Rad CFX Touch and CFX Manager. Judging by this note from Life Technologies (http://find.lifetechnologies.com/Glo...Update_FLR.pdf), the Ct score you mention is the same as (or at least closely related to) the Cq I described above. It is basically the number of cycles after which the sample reaches a pre-determined threshold. The threshold in turn is set at 10x the standard deviation of the baseline.

It may indeed be a good idea to use this value as the start of the log-linear phase. That would remove half the problem, so it's a good start. The other half remains: you still need to determine the end of the log-linear phase manually, which may be quite hard, as competition for primer binding increases gradually during the process. This means the measurement is most accurate in the early part of the log phase (exactly why the threshold for Cq/Ct is placed as low as possible).
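For illustration, here is that thresholding rule as a minimal Python sketch (the trace is invented and cycles are counted from 0; the 10x-SD rule is the one from the note linked above):

Code:
import numpy as np

rfu = np.array([1.0, -2.0, 3.0, -1.0, 2.0, 8.0,
                18.0, 40.0, 85.0, 175.0, 340.0, 600.0])

baseline = rfu[:5]                       # cycles before amplification kicks in
threshold = 10 * baseline.std(ddof=1)    # threshold = 10x SD of the baseline

first = int(np.argmax(rfu > threshold))  # first cycle above the threshold
# Linearly interpolate between the flanking cycles for a fractional Cq:
cq = (first - 1) + (threshold - rfu[first - 1]) / (rfu[first] - rfu[first - 1])
print(f"threshold = {threshold:.1f} RFU, Cq = {cq:.2f}")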
Old 06-19-2013, 08:20 AM   #57
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Okay, I'll take a look at some data I have. The StepOne software does allow export of data at various levels of "rawness".

I have actually been down this path before. But it felt less like a "path" and more like wilderness for which I did not have a map. Also I had no idea whether the solution was there at all. Now I at least would have some sense that it should be...

--
Phillip

Quote:
Originally Posted by DaanV View Post
Yes, I'm familiar with the nomenclature of the S-shaped growth curves of microbes. [...]
Old 06-19-2013, 09:47 AM   #58
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Quote:
Originally Posted by DaanV View Post
It may indeed be a good idea to use this value as the start of the log-linear phase. [...]
From the data I am perusing, it looks like the "efficiency" falls above the autothreshold and increases (even past 100%) below the autothreshold. So maybe an extrapolation of the "efficiency at the autothreshold"?

--
Phillip
Old 06-19-2013, 11:22 AM   #59
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,317
Default

Or maybe what we want is the Ct and the first derivative of the amplification curve at the Ct -- that would be the slope at that point. If the slope at the Ct doesn't match that of the standards (the nearest standard?), hopefully one could apply a correction based on that?
Old 06-20-2013, 06:51 AM   #60
Isequencestuff
Member
 
Location: Cambridge

Join Date: Nov 2012
Posts: 21
Default Kapa qPCR help

Thanks for all the qPCR insights.

Our lab is having trouble getting reproducible data from the KAPA SYBR qPCR kit. We also use the KAPA Illumina standards. It could be an obvious problem, but we can't seem to pin it down. Here's our protocol:

Library Prep
-KAPA/Truseq illumina library construction
-1 ug input for illumina library prep.

qPCR
-KAPA SYBR Fast qPCR kit.
-KAPA illumina standards 1-6.
-library samples in triplicate.
-Kapa illumina standards in triplicate.

-Serially dilute libraries 1:125,000 (1:50, 1:50, 1:50) in 10 mM Tris-HCl pH 8.0, 0.05% Tween 20. (Each 1:50 dilution is 98 ul diluent + 2 ul library; vortex and repeat.) We've found that lower dilutions don't fall within the range of the KAPA Illumina standards and land outside the standard curve. We don't use a multichannel for dilutions, only a P100 and a P10 pipettor.

-Add 6 ul of KAPA SYBR Fast qPCR mix with primers to each well + 4 ul of diluted library/Illumina standard (1-6). Should we be running a 20 ul qPCR reaction instead of 10 ul?

-Set up our StepOne software: standards entered as 6 points in triplicate, 20 uM starting concentration, 1:10 dilution between standards.

Attached are the resulting quantities of three preps, plotted as library sample versus pM, calculated as (mean qPCR quantity of each triplicate) * (size adjustment 452/500 bp) * (dilution factor 125,000).

Hmm1 & Hmm2 seems to have abnormally high pMs. Normal1 is around the range which I'd expect? Don't mind all of the different pMs but only the average pM's. What are your thoughts?
Attached Images
File Type: jpg Hmm1.JPG (26.8 KB, 32 views)
File Type: jpg Hmm2.JPG (27.3 KB, 18 views)
File Type: jpg Normal1.JPG (24.1 KB, 17 views)
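For reference, the pM arithmetic described in the post above as a small Python sketch (the triplicate quantities are made-up placeholders, not the values in the attached plots):

Code:
# Library concentration from qPCR with the size adjustment described above.
triplicate_pM = [0.021, 0.019, 0.020]  # diluted-sample quantities (placeholders)
mean_quantity = sum(triplicate_pM) / len(triplicate_pM)

size_adjust = 452.0 / 500.0            # 452 bp standard vs ~500 bp library
dilution = 125_000                     # 1:50 x 1:50 x 1:50

library_pM = mean_quantity * size_adjust * dilution
print(f"library concentration: {library_pM:.0f} pM")  # ~2260 pM here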