  • Percent of pre-enrichment TruSeq library molecules replication competent?

    So suppose you start with pure genomic DNA and run a TruSeq DNA kit reaction on it. Prior to enrichment PCR, what percentage of the molecules will be replication-competent?

    We just started doing qPCRs on all our libraries prior to enrichment PCR. I have not done an exhaustive study of the results, but I think we get results consistent with 10-30% of the molecules being amplifiable, as assayed by qPCR. By this I mean that we calculate the sample concentration by mass (using a BioAnalyzer DNA chip) and compare it to the concentration determined with a Kapa qPCR kit.

    Does anyone else check this? If so, what do you see?

    --
    Phillip

  • #2
    So you mean how many molecules have the correct adapters ligated to each end?

    • #3
      Originally posted by GW_OK View Post
      So you mean how many molecules have the correct adapters ligated to each end?
      Not just that -- a molecule could have correct adapters ligated to each end, but have an abasic site, or some other type of damage that would prevent it from being replicated by the polymerase.

      Functionally the test is: number of PCR-competent molecules / number of DNA molecules. The former is calculated by qPCR, the latter from fluorimetry or an Agilent chip. But both assays are performed on the pre-enrichment library!
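
      To make the ratio concrete, here is a rough sketch of the arithmetic (in Python), assuming an average of ~650 g/mol per base pair of dsDNA; the concentrations and fragment length below are made-up placeholders, not real data:

      ```python
      # Sketch: fraction of pre-enrichment library molecules that are PCR-competent.
      # All input values are hypothetical placeholders.
      BP_MASS = 650.0                      # approx. g/mol per base pair of dsDNA

      def mass_to_nM(ng_per_ul, avg_frag_bp):
          """Convert a mass concentration (ng/ul) to a molar concentration (nM)."""
          grams_per_liter = ng_per_ul * 1e-9 * 1e6     # ng/ul -> g/L
          return grams_per_liter / (avg_frag_bp * BP_MASS) * 1e9

      bioanalyzer_ng_per_ul = 2.0          # total DNA by mass (BioAnalyzer), assumed
      avg_fragment_bp = 350                # average fragment size from the trace, assumed
      qpcr_nM = 1.5                        # amplifiable molecules (Kapa qPCR), assumed

      total_nM = mass_to_nM(bioanalyzer_ng_per_ul, avg_fragment_bp)
      print(f"{qpcr_nM / total_nM:.0%} of molecules are PCR-competent "
            f"({qpcr_nM:.1f} nM of {total_nM:.1f} nM)")
      ```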

      --
      Phillip

      • #4
        With 10-30% amplifiable then, you've lost 70-90% of your original starting material to either damage or improper ligation? That seems high...

        I'm assuming this is after size selection and such....

        • #5
          Originally posted by GW_OK View Post
          With 10-30% amplifiable then, you've lost 70-90% of your original starting material to either damage or improper ligation? That seems high...
          What I am trying to address here is the "seem high" attitude. No one wants 70-90% of their library to be broken, but as long as you get the desired number of reads out of it, you are unlikely to notice. So while it might "seem high", it might be normal.

          Ethan Ford had a protocol that uses a qPCR on the pre-PCR libraries to determine how many cycles of amplification to run. We use it, so if others are using it or something similar, they would have the information needed to do such a calculation.
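
          The idea, roughly, is that once you know how many amplifiable molecules you start with, the minimum cycle number follows from assuming (optimistically) a doubling per cycle. A sketch with made-up numbers -- not Ethan Ford's actual protocol:

          ```python
          import math

          # Sketch: pick an enrichment-PCR cycle number from a pre-PCR qPCR result.
          # Assumes perfect doubling per cycle; real protocols add cycles to cover
          # sub-100% efficiency and handling losses. All values are hypothetical.
          qpcr_nM = 0.5          # amplifiable library, pre-enrichment (assumed)
          volume_ul = 25.0       # volume going into the enrichment PCR (assumed)
          target_fmol = 1000.0   # desired amount of library after PCR (assumed)

          starting_fmol = qpcr_nM * volume_ul                  # nM * ul = fmol
          cycles = math.ceil(math.log2(target_fmol / starting_fmol))
          print(f"{starting_fmol:.1f} fmol in -> ~{cycles} cycles to reach {target_fmol:.0f} fmol")
          ```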

          Originally posted by GW_OK View Post
          I'm assuming this is after size selection and such....
          Well, at this point we only do a lower-bound size selection (AMPure). But the next step is enrichment PCR, if that helps.

          --
          Phillip

          • #6
            I just wanted to see if you weren't also quantifying adapter dimers, concatemers, stuff like that.

            Hm, though, 90% of your sample won't PCR? What happened to it betwixt the sample submission and this pre-enrichment PCR? Where did this damage arise? I mean, we know some DNA doesn't make it through the library prep process, but I'd always attributed it to the loss in the cleanup steps.
            And I mean, it's not just getting the number of reads, you've got to have enough to dilute properly to get on the cBot.

            • #7
              Originally posted by GW_OK View Post
              I just wanted to see if you weren't also quantifying adapter dimers, concatemers, stuff like that.

              Hm, though, 90% of your sample won't PCR? What happened to it betwixt the sample submission and this pre-enrichment PCR? Where did this damage arise? I mean, we know some DNA doesn't make it through the library prep process, but I'd always attributed it to the loss in the cleanup steps.
              What reason do you have to think that >10% of a typical genomic DNA prep can be replicated by a polymerase?

              Originally posted by GW_OK View Post
              And I mean, it's not just getting the number of reads, you've got to have enough to dilute properly to get on the cBot.
              No, that happens after enrichment PCR. Not sure how many cycles of enrichment PCR you do -- for 10 cycles you would have a 1000x increase in the number of amplifiable library molecules.

              But even if you went with very limited (or no) enrichment PCR, would you expect to be short on library? If you start with 1 ug of genomic DNA, that would be roughly 4 trillion 250 bp fragments. Say the library construction protocol nets you 10%, but 90% of those won't amplify for whatever reason. You are down to about 40 billion "amplifiable" molecules. What does that work out to? Maybe 1 nM if you have the library in 50 ul, right? You would be happy with that, right?
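
              Spelling that arithmetic out (assuming ~650 g/mol per base pair; the 10% yield figures are just the guesses from above, not measurements):

              ```python
              # Sketch of the back-of-envelope above; yields are guesses, not measurements.
              AVOGADRO = 6.022e23
              BP_MASS = 650.0              # approx. g/mol per base pair of dsDNA

              input_ug = 1.0               # genomic DNA going into the prep
              fragment_bp = 250            # nominal fragment size
              prep_yield = 0.10            # fraction surviving library construction (assumed)
              amplifiable_frac = 0.10      # fraction of survivors that can amplify (assumed)
              final_volume_ul = 50.0

              fragments = input_ug * 1e-6 / (fragment_bp * BP_MASS) * AVOGADRO
              amplifiable = fragments * prep_yield * amplifiable_frac
              nM = amplifiable / AVOGADRO / (final_volume_ul * 1e-6) * 1e9
              print(f"{fragments:.1e} fragments -> {amplifiable:.1e} amplifiable -> {nM:.1f} nM")
              ```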

              --
              Phillip

              • #8
                Originally posted by pmiguel View Post
                What reason do you have to think that >10% of a typical genomic DNA prep can be replicated by a polymerase?
                Well why wouldn't it? I admit I don't know the efficiencies of each enzymatic step in the process but is there any reason to not think a good percentage of molecules will make it through each step?

                Originally posted by pmiguel View Post
                No, that happens after enrichment PCR. Not sure how many cycles of enrichment PCR you do -- for 10 cycles you would have a 1000x increase in the number of amplifiable library molecules.
                We use 10 as well; maybe it's possible to extrapolate back to the original concentration of DNA from the post-PCR quantification...

                Originally posted by pmiguel View Post
                But even if you went with very limited (or no) enrichment PCR, would you expect to be short on library? If you start with 1 ug of genomic DNA, that would be roughly 4 trillion 250 bp fragments. Say the library construction protocol nets you 10%, but 90% of those won't amplify for whatever reason. You are down to about 40 billion "amplifiable" molecules. What does that work out to? Maybe 1 nM if you have the library in 50 ul, right? You would be happy with that, right?
                If it's enough to load then I'm happy, usually. But what I'm still not understanding is why 70-90% of the sample won't amplify prior to PCR enrichment, which is what you're claiming. What has occurred to it? Ligation efficiencies I can understand, but is there a preponderance of abasic sites in a raw genomic extraction? Hairpins? What?

                Edit:
                Also, have you tried quantifying the nucleic acid by weight in something other than a bioanalyzer?
                Last edited by GW_OK; 06-07-2012, 10:24 AM.

                • #9
                  Originally posted by GW_OK View Post
                  Well why wouldn't it? I admit I don't know the efficiencies of each enzymatic step in the process but is there any reason to not think a good percentage of molecules will make it through each step?
                  My point is that we have no basis to believe that either one is true. That is, if 90% of genomic DNA fragments cannot be replicated by a polymerase (especially a non-proofreading pol like Taq), would you even notice? If you would notice, then how? Specifically, what assay do you do that would address this question?

                  Originally posted by GW_OK View Post
                  We use 10 as well; maybe it's possible to extrapolate back to the original concentration of DNA from the post-PCR quantification...
                  I doubt it -- the reaction is probably past log phase after 10 cycles. But you could set your theoretical maximum at 1024x if your DNA doubles in amount each cycle.
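
                  In other words, the post-PCR number only gives you a lower bound on the pre-PCR amount, something like this (the concentration is a made-up example):

                  ```python
                  # Sketch: the post-PCR quantification only bounds the pre-PCR amount.
                  # With at most a doubling per cycle (and a plateau in practice), the
                  # pre-PCR amplifiable library was at least post / 2**cycles.
                  post_pcr_nM = 80.0      # post-enrichment concentration (assumed)
                  cycles = 10
                  print(f"pre-PCR amplifiable library was at least {post_pcr_nM / 2**cycles:.2f} nM")
                  ```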

                  Originally posted by GW_OK View Post
                  If it's enough to load then I'm happy, usually. But what I'm still not understanding is why 70-90% of the sample won't amplify prior to PCR enrichment, which is what you're claiming. What has occurred to it? Ligation efficiencies I can understand, but is there a preponderance of abasic sites in a raw genomic extraction? Hairpins? What?
                  Well, it could be any of those things or lots of others as well. I am not saying that genomic DNA always has lots of damage of this type -- just that it could and we would not even notice.

                  Originally posted by GW_OK View Post
                  Edit:
                  Also, have you tried quantifying the nucleic acid by weight in something other than a bioanalyzer?
                  Not recently. In the past, fluorimetric and BioAnalyzer readings gave vaguely concordant results. But past that, not really. If you want to hypothesize that either the qPCR or the BioAnalyzer (or both) is not accurate, I would admit that is a possibility.

                  --
                  Phillip

                  • #10
                    Could the low number of "amplifiable" molecules pre-enrichment PCR be due to unrepairable ends after fragmentation of the DNA? I have always assumed that mechanical shearing of DNA leaves ends that are unrepairable, i.e., the strand breaks in a way that doesn't leave a 5' PO4 and is not "kinasable", which would prevent enzymatic repair of the strand. 3' ends are a little more forgiving, since T4 DNA pol could chew back to give a repairable end. Maybe enzymatic fragmentation (fragmentase and the like) would give more efficient ligation / a higher fraction of amplifiable molecules pre-enrichment PCR?

                    • #11
                      Originally posted by bbeitzel View Post
                      Could the low number of "amplifiable" molecules pre-enrichment PCR be due to unrepairable ends after fragmentation of the DNA? I have always assumed that mechanical shearing of DNA leaves ends that are unrepairable, i.e., the strand breaks in a way that doesn't leave a 5' PO4 and is not "kinasable", which would prevent enzymatic repair of the strand. 3' ends are a little more forgiving, since T4 DNA pol could chew back to give a repairable end. Maybe enzymatic fragmentation (fragmentase and the like) would give more efficient ligation / a higher fraction of amplifiable molecules pre-enrichment PCR?
                      Sure that is a possibility. But really, I don't need more amplifiable molecules. 1 billion of them should be plenty for nearly any purpose and in most cases we end up with lots more than that. 1 billion molecules would be sub-1 ng of library prior to enrichment PCR.
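
                      (For reference, that sub-1 ng figure comes straight from the mass of the molecules, assuming ~650 g/mol per base pair and a ~370 bp adapted fragment -- both assumptions, obviously:)

                      ```python
                      # Sketch: convert a molecule count to nanograms of library.
                      # Fragment length and mass per bp are assumptions for illustration.
                      AVOGADRO = 6.022e23
                      BP_MASS = 650.0                # approx. g/mol per base pair of dsDNA
                      molecules = 1e9
                      adapted_length_bp = 370        # insert plus adapters (assumed)
                      ng = molecules / AVOGADRO * adapted_length_bp * BP_MASS * 1e9
                      print(f"{molecules:.0e} molecules of a {adapted_length_bp} bp library ~ {ng:.2f} ng")
                      ```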

                      Nevertheless I wonder what "normal" is.

                      But I do take what you write seriously. I have addressed it a little myself here. The answer is that probably most, if not all, of the ends should be "repairable" via the typical T4 pol/T4 PNK end-repair methods. Further, this is testable -- take a little of your end-repaired (but not A-tailed) fragmented DNA and subject it to ligation. Run pre- and post-ligation aliquots on a gel. If most of the DNA ends up at high molecular weight, then you know your ends are not the issue. This post is a data point suggesting most ends must be ligatable, at least in that poster's hands.

                      --
                      Phillip
