
  • #16
    I just heard that the original Nextera enzyme from Epicentre gave nice peaks, but that Illumina's version is not as good. Not that this helps, but it at least explains why the Nextera kit is more difficult to work with than promised.



    • #17
      In case anyone is interested, here's a quick update on the results from our sequencing:

      -We sequenced two pools of libraries in two lanes of PE 100 bp HiSeq, with one lane yielding 100 million read pairs and the other 180 million.

      -I analyzed one lane using bwa sampe -a 2000 to allow insert sizes up to 2 kb to be properly paired. From Picard's CollectInsertSizeMetrics, the median insert size is 194 bp (see attached). It seems to me that the clustering and/or sequencing step is strongly biased towards recovery of the shorter fragments, even though the Bioanalyzer finds a peak size of ~1 kb.
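For anyone wanting to reproduce this check without Picard, the median-insert calculation can be sketched in Python. This is a simplified stand-in for what CollectInsertSizeMetrics reports (not its actual implementation), operating on plain SAM records with made-up field values:

```python
import statistics

def median_insert_size(sam_lines, max_insert=2000):
    """Median insert size over properly paired reads.

    sam_lines: iterable of SAM alignment records (header lines excluded).
    Each pair is counted once by keeping only positive TLEN values.
    """
    tlens = []
    for line in sam_lines:
        fields = line.rstrip("\n").split("\t")
        flag, tlen = int(fields[1]), int(fields[8])
        # 0x2 = read mapped in proper pair; the cap mirrors bwa sampe -a 2000
        if flag & 0x2 and 0 < tlen <= max_insert:
            tlens.append(tlen)
    return statistics.median(tlens)
```

A pair whose TLEN exceeds the cap is simply dropped, which is roughly what happens when bwa refuses to flag an oversized pair as proper.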

      We're very pleased with the results and will continue to use Nextera.
      Attached Files



      • #18
        Originally posted by pjuneja View Post
        Yes, we see this as well for Nextera and other methods. It suggests there is some sort of direct competition during clustering that strongly favors the creation of clusters of shorter amplicons.

        --
        Phillip



        • #19
          I have just started doing Nextera DNA library preps and I am getting the same larger-than-expected peaks (1000-2000 bp), with some bimodality too. I am attaching a picture of the last 11 libraries I ran on the Bioanalyzer HS chip; this is post-PCR, as we did not run the libraries post-tagmentation (pre-PCR).

          Extractions were done with the Qiagen Blood and Tissue kit; we included an RNase treatment and used a buffer without EDTA (EB Buffer).

          I was wondering if I should try lowering the input material to 30 ng as a test (all libraries shown had inputs ranging from 41 to 51 ng, but there is no pattern corresponding to input amount). I also wondered about trying a longer tagmentation step, but I suspect I would just get more small fragments while keeping the large peak around 1000-2000 bp.

          I am wondering if the bimodality is just an insertion preference bias of the transposome, in which case I guess I can't do anything! Seems that Nextera is highly variable...

          Does anyone with previous experience think that my libraries will still sequence OK (100 bp paired-end on the HiSeq), despite the large peak and some bimodality? How do you optimize the cluster density with bimodal distributions?
          Attached Files



          • #20
            Nextera insert sizes

            We have had very similar Bioanalyzer traces in the past, but now routinely get unimodal peaks with 400-1000 bp average size. Here are some things we believe are important for optimum results.

            1) DNA must be accurately quantified and diluted so that exactly 50 ng is used in the tagmentation reaction. All dilutions of DNA should be done with Tris buffer containing 0.05% Tween 20. DNA at low concentrations can stick to the plasticware, while DNA (especially genomic) at high concentrations can give inaccurate pipetting because of the viscosity. Your variable Bioanalyzer traces indicate too much tagmentation due to a variable and inadequate amount of DNA used in the reactions.

            2) Be wary of N501 and possibly other combinations of i7 and i5 bar-coded primers. Use the i7 indices with N505 for the most reliable results. You can order N505 from any oligo supplier and dilute it to 0.5 micromolar.

            3) Increase the number of PCR cycles from five to eight and decrease the extension time from three to two min.

            4) Be extra careful with the Ampure cleanup to avoid getting fragments less than 300 bp. We add 29 ul of beads instead of 30. The MW cutoff is very sensitive to the ratio of beads to PCR reaction volume.

            5) At least for genomic sequencing, we don't think fragments >1 kb in a Nextera library are a problem. However, do make sure they are included in the average size calculation, because this will significantly impact the concentration of that library in the pool.

            If anyone else has tips to add to this list, please do. We are still looking to optimize the process. We typically get cluster densities of ~1200 K/mm2, which appears to be close to the optimum, but by flying so close to the max, we occasionally overshoot and the MiSeq can't resolve the clusters. There are many parameters involved in hitting the sweet spot and we still don't have it under full control.
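On point 5, the reason average size matters is the standard mass-to-molarity conversion for dsDNA (~660 g/mol per base pair). A minimal sketch, with illustrative numbers:

```python
def library_molarity_nm(conc_ng_per_ul, avg_size_bp):
    """Convert a dsDNA library concentration (ng/ul) to nM,
    assuming ~660 g/mol per base pair."""
    return conc_ng_per_ul * 1e6 / (660 * avg_size_bp)

# The same 10 ng/ul library is ~2.5x more concentrated in molar terms
# if its true average size is 400 bp rather than 1000 bp:
print(library_molarity_nm(10, 400))   # ~37.9 nM
print(library_molarity_nm(10, 1000))  # ~15.2 nM
```

Ignoring the >1 kb tail understates the average size and overstates the molar concentration, so that library ends up under-loaded in the pool.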



            • #21
              I have not used Nextera at all, but I was intrigued by the comment that gDNA must be accurately quantified so that the tagmentation procedure works properly.
              I have had a problem with accurately quantifying genomic DNA. In my opinion, the Qubit is not fit to do this: there is a big difference between NanoDrop and Qubit measurements, and even with repeats it all seems a bit random. Has anybody out there had a similar experience? Could that perhaps be the reason why the amount of input DNA is overestimated?



              • #22
                Originally posted by mboth View Post
                Nanodrop uses the A260/A280 and A260/A230 ratios, which tend to be wildly inaccurate due to frequent and persistent polysaccharide contamination in many DNA samples. Qubit is also thrown off by high polysaccharide concentrations, which lead to partially insoluble polysaccharide/DNA masses that are unevenly distributed through the solution. This causes irreproducible pipetting. It is best to dilute DNA solutions down to the point where the cloudiness caused by polysaccharides is barely visible.



                • #23
                  Originally posted by mboth View Post
                  No idea why you draw the conclusion that the Qubit is not fit to estimate the concentration of genomic DNA. Other than running a gel with good mass standards, fluorimetry is pretty much the only way to get a sane estimate of the concentration of genomic DNA in a prep.

                  Yes, NanoDrop UV spectrophotometry is normally a poor method for estimating the concentration of genomic DNA in a prep. I discuss some of the reasons why here.

                  --
                  Phillip



                  • #24
                    Originally posted by drdna View Post
                    The Nanodrop spectrophotometer does not use ratios to estimate DNA concentrations -- it pretty much just uses the absorbance at 260nm. (There is a caveat here, because apparently it subtracts background that it determines from a wavelength in the visible part of the spectrum.)

                    I think you can largely remove the insoluble glop that one finds in some DNA preps by giving them a hard spin and pipetting off the supernatant into a separate tube. The supe is the DNA and the pellet is mostly insoluble stuff (possibly polysaccharides).

                    By "hard", I mean "hard" -- like >10 minutes at >10K RPM in a microfuge hard. I mention this only because there is a maddening tendency for people to regard all centrifuges as equal -- that 100RPM spin you get from a cheap "touch spin" centrifuge is *not* the same as a "hard" spin.

                    By the way, while this method should give a sample that can be assayed for concentration on a fluorimeter, don't get the idea that a UV spectrophotometer will also do the job. It probably won't. I detail some of the reasons why here.

                    --
                    Phillip
                    Last edited by pmiguel; 07-08-2013, 05:26 AM.



                    • #25
                      I have been making my very first Nextera library. I accidentally added double the amount of tagmentation enzyme to half my 96-well plate (columns 1-6, seen as the first 6 lanes in the attached image), did 8 cycles of PCR as recommended by others, and looked at my libraries on the Bioanalyzer. Those that had twice as much enzyme show a much better size range (lanes 7-11), whereas those that had the right amount are all a lot bigger. Though to get a nice size range, it is not ideal to have to add more of the very expensive enzyme.

                      I am guessing I will have to make separate pools for sequencing based on the different size ranges - does that sound right?
                      Attached Files



                      • #26
                        Originally posted by creeves View Post
                        5) At least for genomic sequencing, we don't think fragments >1 kb in a Nextera library are a problem. However, do make sure they are included in the average size calculation, because this will significantly impact the concentration of that library in the pool.
                        Hi, creeves and everyone!
                        I'm having a few issues with Nextera libraries too. I'm working on plant gDNA and my inserts are >1 kb. I used the Kapa qPCR kit to quantify my libraries, but here I run into a problem with suggestion 5): according to the Kapa kit protocol, the DNA polymerase only amplifies fragments up to about 1 kb (given the 90 s combined annealing/extension step). So it would be incorrect to include fragments above 1 kb, because they are not quantified during qPCR and are not represented in the final concentration.
                        I would like to use your method (including all sizes): when I tried on my own and used an input of 20 pM on the MiSeq, I only got a cluster density of ~400 K/mm2. But I can't, since Kapa qPCR doesn't quantify fragments >1 kb! I'd appreciate any advice on this!
                        Thanks in advance!
                        Science is ok, but I'm hungry.



                        • #27
                          I would suggest using the standard KAPA qPCR protocol as you have described. For the average size calculation, use the region from 100-950 bp, and to start, cluster at 1-2 pM less than your optimum for shotgun gDNA libraries. You can then adjust the pM input slightly on subsequent runs to hit the optimum cluster number for the specific chemistry.
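The size correction under discussion can be sketched as follows: qPCR reports concentration relative to standards of a fixed length, so the result is scaled by the ratio of standard size to average library size. The ~452 bp standard length is the commonly cited KAPA value, but verify it against your kit insert:

```python
def size_adjusted_pm(qpcr_pm, lib_avg_bp, std_bp=452):
    """Scale a qPCR-derived concentration (pM) by standard/library size.

    lib_avg_bp would be the average size from the 100-950 bp region
    of the Bioanalyzer trace, as suggested above.
    """
    return qpcr_pm * std_bp / lib_avg_bp

# a 700 bp library reading 20 pM against 452 bp standards:
print(size_adjusted_pm(20, 700))  # ~12.9 pM
```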



                          • #28
                            I just wanted to point out that the Bioanalyzer trace is more than a little misleading with regard to the fragment length distribution.

                            The important thing is that the Bioanalyzer trace is on a log scale (like a normal agarose gel), whereas the size distribution of reads (as reported earlier in this thread in post #17) was plotted on linear axes.

                            From my understanding, the size distribution of any library, irrespective of the fragmentation method (provided it is a random process), should always display an exponential decay curve: the fragmentation sites form a Poisson process, so fragment lengths are exponentially distributed. The mode should always be below the median, and the median below the mean.

                            The log nature of DNA migration through agarose gels (and the Bioanalyzer) essentially cancels out the appearance of this inherent exponential shape.

                            When companies state that their favourite shearing method creates a tight size distribution, I think they are probably full of it.

                            Disclaimer: [This is assuming that for any chosen method the fragmentation events are randomly distributed relative to one another.]
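A small simulation of uniformly random cuts illustrates the point; the genome size and cut count below are arbitrary:

```python
import random

def fragment_lengths(genome_len, n_cuts, seed=1):
    """Cut a genome at n_cuts uniformly random positions and return
    the resulting fragment lengths (an approximate Poisson process)."""
    rng = random.Random(seed)
    cuts = sorted(rng.randrange(genome_len) for _ in range(n_cuts))
    edges = [0] + cuts + [genome_len]
    return [b - a for a, b in zip(edges, edges[1:])]

lengths = sorted(fragment_lengths(10_000_000, 20_000))
mean = sum(lengths) / len(lengths)
median = lengths[len(lengths) // 2]
# exponential decay: the median sits at ~ln(2) = 0.69 of the mean
print(median < mean)  # True
```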

                            p.s. I know this is an old thread, but I was browsing for something, and thought it worth mentioning.
                            Last edited by M4TTN; 10-31-2014, 04:42 AM.



                            • #29
                              I am having a similar problem with my experiment so I hope someone can give me a clue.

                              I'm making Nextera XT libraries from some phage genomic DNA samples, following the protocol instructions. Samples were previously diluted in water and quantified with Qubit, and the A260/A280 ratio on the NanoDrop was around 1.7-1.8.

                              After running my libraries on a Bioanalyzer HS DNA chip, I observed low DNA quantities for most samples and peaks well above the expected 300-500 bp size, usually around 1300 bp, as you can see in the Recopilation image file.

                              Since I expected smaller fragments, I made Sample 4 again with the following modification in order to get shorter fragments:

                              Sample 4: positive control, made according to the protocol instructions (5 minutes in the thermocycler and 5 ul of Tagment DNA Buffer).

                              Sample 4 + min: 10 minutes in the thermocycler instead of 5, to give the enzyme more time to tagment my DNA.

                              I assumed I would obtain smaller fragment sizes, because the enzyme would have had more time to cut and tag the DNA.

                              My result was even worse, as you can see in the ReRun image file.

                              As you can see, my control sample now gave a smaller fragment size than before (1130 vs 1670 bp), which makes no sense, while my experimental sample gave even larger fragments (1709 bp).

                              Q1. Why did I get bigger fragment sizes?

                              Q2. According to previous posts, maybe decreasing the PCR extension time and adding some extra cycles could be a solution. Do you think this also applies to Nextera XT? Any other advice?

                              Thanks in advance
                              Attached Files



                              • #30
                                Nextera library sizes

                                A library with an average size up to 2000 bp is no problem. See our just-published paper in ACS Synthetic Biology for some tips. The length of the tagmentation step is not important as long as the reaction goes to completion, i.e. all the transposomes have tagmented the DNA sample. Tagmentation is stoichiometric, not catalytic. The amount of DNA going into the tagmentation reaction is critical: too much DNA gives fragments too large to amplify, while too little DNA gives small fragments that are lost during the SPRI cleanup. If you are using the Nextera XT kit and protocol, the extension time should be fine. Most likely you need to quantify and dilute your DNA more carefully.
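The stoichiometric point can be sketched numerically. The transposome count below is entirely hypothetical; the only claim is that, with a fixed amount of enzyme and complete tagmentation, mean fragment size scales linearly with DNA input:

```python
def mean_fragment_bp(dna_ng, transposome_count):
    """Mean fragment size if every transposome inserts exactly once:
    total base pairs divided by number of cuts."""
    BP_PER_NG = 9.1e11  # ~9.1e11 bp per ng of dsDNA (at ~660 g/mol/bp)
    return dna_ng * BP_PER_NG / transposome_count

# doubling the DNA input doubles the expected fragment size:
print(mean_fragment_bp(50, 1e11))   # ~455 bp
print(mean_fragment_bp(100, 1e11))  # ~910 bp
```

This is why a pipetting error in the DNA input, rather than the tagmentation time, shifts the whole size distribution.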

