SEQanswers

SEQanswers > Applications Forums > Sample Prep / Library Generation



Old 06-20-2013, 03:46 AM   #21
mboth
Member
 
Location: UK

Join Date: Oct 2010
Posts: 20
Default

I have not used Nextera at all, but I was intrigued by the comment that gDNA must be accurately quantified so that the tagmentation procedure works properly.
I have had a problem with accurately quantifying genomic DNA. In my opinion, the Qubit is not fit to do this: there is a big difference between Nanodrop and Qubit measurements, and even with repeats it all seems a bit random. Has anybody out there had a similar experience? Could that perhaps be the reason why the amount of input DNA is overestimated?
Old 07-05-2013, 02:25 PM   #22
drdna
Member
 
Location: Kentucky

Join Date: May 2012
Posts: 71
Default

Quote:
Originally Posted by mboth View Post
I have not used Nextera at all, but I was intrigued by the comment that gDNA must be accurately quantified so that the tagmentation procedure works properly.
I have had a problem with accurately quantifying genomic DNA. In my opinion, the Qubit is not fit to do this: there is a big difference between Nanodrop and Qubit measurements, and even with repeats it all seems a bit random. Has anybody out there had a similar experience? Could that perhaps be the reason why the amount of input DNA is overestimated?
The Nanodrop uses the A260/A280 and A260/A230 ratios, which tend to be wildly inaccurate due to frequent and persistent polysaccharide contamination in many DNA samples. The Qubit is also thrown off by high polysaccharide concentrations, which lead to partially insoluble polysaccharide/DNA masses that are unevenly distributed through the solution. This leads to irreproducible pipetting. It is best to dilute DNA solutions down to the point where the cloudiness caused by the polysaccharides is barely visible.
Old 07-08-2013, 05:14 AM   #23
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,291
Default

Quote:
Originally Posted by mboth View Post
I have not used Nextera at all, but I was intrigued by the comment that gDNA must be accurately quantified so that the tagmentation procedure works properly.
I have had a problem with accurately quantifying genomic DNA. In my opinion, the Qubit is not fit to do this: there is a big difference between Nanodrop and Qubit measurements, and even with repeats it all seems a bit random. Has anybody out there had a similar experience? Could that perhaps be the reason why the amount of input DNA is overestimated?
I have no idea why you draw the conclusion that the Qubit is not fit to estimate the concentration of genomic DNA. Other than running a gel with good mass standards, fluorimetry is pretty much the only way to get a sane estimate of the concentration of genomic DNA in a genomic DNA prep.

Yes, Nanodrop UV spectrophotometry is normally a poor method for estimating the concentration of genomic DNA in a prep. I discuss some of the reasons why here.

--
Phillip
Old 07-08-2013, 05:24 AM   #24
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,291
Default

Quote:
Originally Posted by drdna View Post
The Nanodrop uses the A260/A280 and A260/A230 ratios, which tend to be wildly inaccurate due to frequent and persistent polysaccharide contamination in many DNA samples. The Qubit is also thrown off by high polysaccharide concentrations, which lead to partially insoluble polysaccharide/DNA masses that are unevenly distributed through the solution. This leads to irreproducible pipetting. It is best to dilute DNA solutions down to the point where the cloudiness caused by the polysaccharides is barely visible.
The Nanodrop spectrophotometer does not use ratios to estimate DNA concentrations -- it pretty much just uses the absorbance at 260 nm. (There is a caveat here, because apparently it subtracts a background that it determines from a wavelength in the visible part of the spectrum.)
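That A260-based estimate can be sketched with a back-of-the-envelope Beer-Lambert calculation. This is purely illustrative, not any instrument's actual firmware: the 50 ng/µl-per-A260 factor is the standard textbook value for dsDNA at a 1 cm path, and which visible wavelength gets subtracted as background is an assumption here.

```python
# Rough sketch of an A260-based dsDNA concentration estimate.
# Assumes the textbook conversion of 50 ng/ul per A260 unit
# (dsDNA, 1 cm path) and an optional visible-wavelength background
# reading (a_background is purely illustrative -- instruments differ).
def dsdna_ng_per_ul(a260, a_background=0.0, path_cm=1.0):
    corrected = (a260 - a_background) / path_cm  # normalize to a 1 cm path
    return corrected * 50.0                      # 50 ng/ul per A260 unit

print(dsdna_ng_per_ul(0.5))                    # 25.0
print(dsdna_ng_per_ul(0.5, a_background=0.1))  # ~20
```

The weakness is visible in the formula: anything absorbing near 260 nm (RNA, free nucleotides, phenol carry-over) inflates the reading, which is why fluorimetry with a dsDNA-specific dye is the safer call for gDNA preps.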

I think you can largely remove the insoluble glop that one finds in some DNA preps by giving them a hard spin and pipetting off the supernatant into a separate tube. The supe is the DNA and the pellet is mostly insoluble stuff (possibly polysaccharides).

By "hard", I mean "hard" -- like >10 minutes at >10K RPM in a microfuge hard. I mention this only because there is a maddening tendency for people to regard all centrifuges as equal -- that 100 RPM spin you get from a cheap "touch spin" centrifuge is *not* the same as a "hard" spin.

By the way, while this method should give a sample that can be assayed for concentration on a fluorimeter, don't get the idea that a UV spectrophotometer will also do the job. It probably won't. I detail some of the reasons why here.

--
Phillip

Last edited by pmiguel; 07-08-2013 at 05:26 AM.
Old 11-05-2013, 06:58 PM   #25
cazzb
Junior Member
 
Location: Melbourne

Join Date: Nov 2013
Posts: 1
Default

I have been making my very first Nextera library. I accidentally added double the amount of tagmentation enzyme to half of my 96-well plate (columns 1-6, seen as the first 6 lanes in the attached image), did 8 cycles of PCR as recommended by others, and looked at my libraries on the Bioanalyzer. Those which had twice as much enzyme have a much better size range (lanes 7-11), whereas those that had the right amount are all a lot bigger. It is not ideal, though, to have to add more of the very expensive enzyme just to get a nice size range.

I am guessing I will have to make separate pools for sequencing based on the different size ranges - does that sound right?
Attached Images
File Type: jpg Nextera libraries.jpg (60.9 KB, 175 views)
Old 08-18-2014, 09:12 AM   #26
ArciMol
Junior Member
 
Location: Chile

Join Date: Apr 2014
Posts: 8
Default

Quote:
Originally Posted by creeves View Post
We have had very similar Bioanalyzer traces in the past, but now routinely get unimodal peaks with 400-1000 bp average size. Here are some things we believe are important for optimum results.

1) DNA must be accurately quantified and diluted so that exactly 50 ng is used in the tagmentation reaction. All dilutions of DNA should be done with Tris buffer containing 0.05% Tween 20. DNA at low concentrations can stick to the plasticware, while DNA (especially genomic) at high concentrations can give inaccurate pipetting because of the viscosity. Your variable Bioanalyzer traces indicate too much tagmentation due to a variable and inadequate amount of DNA used in the reactions.

2) Be wary of N501 and possibly other combinations of i7 and i5 bar-coded primers. Use the i7 indices with N505 for the most reliable results. You can order N505 from any oligo supplier and dilute it to 0.5 micromolar.

3) Increase the number of PCR cycles from five to eight and decrease the extension time from three to two min.

4) Be extra careful with the Ampure cleanup to avoid getting fragments less than 300 bp. We add 29 µl of beads instead of 30 µl. The MW cutoff is very sensitive to the ratio of beads to PCR reaction.

5) At least for genomic sequencing, we don't think fragments >1 kb in a Nextera library are a problem. However, do make sure they are included in the average size calculation, because this will significantly impact the concentration of that library in the pool.

If anyone else has tips to add to this list, please do. We are still looking to optimize the process. We typically get cluster densities of ~1200 K/mm2, which appears to be close to the optimum, but by flying so close to the max, we occasionally overshoot and the MiSeq can't resolve the clusters. There are many parameters involved in hitting the sweet spot and we still don't have it under full control.
Quote:
5) At least for genomic sequencing, we don't think fragments >1 kb in a Nextera library are a problem. However, do make sure they are included in the average size calculation, because this will significantly impact the concentration of that library in the pool.
Hi, creeves and everyone!
I'm having a few issues with Nextera libraries too. I'm working on plant gDNA and my inserts are >1 kb. I used the Kapa qPCR kit to quantify my libraries, but at this point I have a problem with suggestion 5). According to the Kapa kit protocol, the DNA polymerase amplifies up to 1 kb (given the 90 s annealing-extension step). So it would be incorrect to include fragments above 1 kb, because they are not quantified during qPCR and are not represented in the final concentration.
I would like to try your method (including all sizes): on my own I used an input of 20 pM for MiSeq and got a cluster density of only ~400 K/mm2. But I can't, since the Kapa qPCR doesn't quantify fragments >1 kb! I'd appreciate any advice on this!
Thanks in advance!
__________________
Science is ok, but I'm hungry.
Old 08-18-2014, 03:48 PM   #27
nucacidhunter
Jafar Jabbari
 
Location: Melbourne

Join Date: Jan 2013
Posts: 1,179
Default

I would suggest using the standard KAPA qPCR protocol as you have described. For the average size calculation, use the 100-950 bp region, and to start, load 1-2 pM less than you would for your optimal shotgun gDNA libraries. You can then adjust the loading concentration slightly on subsequent runs to reach the optimal cluster number for the specific chemistry.
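A sketch of the conversion this advice leans on: given a mass concentration and an average size from the chosen Bioanalyzer region, molarity follows from the usual ~660 g/mol per base pair for dsDNA. The function name and the example numbers are illustrative, not from any kit manual.

```python
# Convert a mass concentration (e.g. Qubit) plus an average library
# size (e.g. from a 100-950 bp Bioanalyzer region) into molarity.
# Uses the standard ~660 g/mol per base pair for double-stranded DNA.
def library_nM(conc_ng_per_ul, avg_size_bp):
    # nM = (ng/ul) / (660 g/mol/bp * size in bp) * 1e6
    return conc_ng_per_ul / (660.0 * avg_size_bp) * 1e6

# A 2 ng/ul library averaging 500 bp is ~6 nM:
print(round(library_nM(2.0, 500), 2))  # 6.06
```

Note the size term in the denominator: that is why including or excluding the >1 kb tail of the trace changes the calculated molarity, and hence the loading concentration, so much.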
Old 10-31-2014, 04:38 AM   #28
M4TTN
Member
 
Location: UK

Join Date: Jan 2014
Posts: 74
Default

I just wanted to point out that the Bioanalyzer trace is more than a little misleading with regard to the fragment length distribution.

The important thing is that the Bioanalyzer trace is on a log scale (like a normal agarose gel), whereas the size distribution of reads reported earlier in this thread (post #17) was plotted on linear axes.

From my understanding, the size distribution of any library, irrespective of the fragmentation method (provided it is a random process), should always display an exponential decay curve: the cut sites form a Poisson process, so the spacings between them are exponentially distributed. The mode lies below the median, and the median below the mean.

The logarithmic nature of DNA migration through agarose gels (and the Bioanalyzer) essentially appears to cancel out these inherent exponential characteristics.

When companies state that their favourite shearing method creates a tight size distribution, I think they are probably full of it.

Disclaimer: [This is assuming that for any chosen method the fragmentation events are randomly distributed relative to one another.]
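Under that same assumption, a quick simulation shows the effect. The genome size and cut count below are made up for illustration, not real library data: uniform random cut sites give exponentially distributed fragment lengths, with the mean above the median, even though the peak looks deceptively tight once replotted on a log axis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random fragmentation modelled as a Poisson process: cut sites fall
# uniformly along a long molecule, so the lengths between successive
# cuts are (approximately) exponentially distributed.
genome_bp = 10_000_000
n_cuts = 20_000                      # ~500 bp mean fragment size
cuts = np.sort(rng.uniform(0, genome_bp, n_cuts))
frags = np.diff(cuts)

mean, median = frags.mean(), np.median(frags)
print(f"mean   ~ {mean:.0f} bp")     # close to 500
print(f"median ~ {median:.0f} bp")   # close to ln(2) * mean, i.e. ~347
# The mode of an exponential is at (or near) zero, so on a linear
# axis the distribution is a decay curve, not a tight peak.
```

Replotting `frags` on a logarithmic x-axis produces the familiar hump-shaped Bioanalyzer-style trace from exactly the same data.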

p.s. I know this is an old thread, but I was browsing for something, and thought it worth mentioning.

Last edited by M4TTN; 10-31-2014 at 04:42 AM.
Old 06-09-2015, 04:51 AM   #29
chariko
Member
 
Location: Spain

Join Date: Jun 2010
Posts: 56
Default

I am having a similar problem with my experiment so I hope someone can give me a clue.

I'm making Nextera XT libraries from some phage genomic DNA samples, following the protocol instructions. Samples were previously diluted in water and quantified with Qubit, and the A260/A280 ratio was around 1.7-1.8 on the Nanodrop.

After running my libraries on a Bioanalyzer HS DNA chip, I observed a low yield for most of the samples, and peaks well above the expected 300-500 bp size, usually around 1300 bp, as you can see in the Recopilation image file.

As I expected smaller fragments, I remade Sample 4 with the following modification in order to get shorter fragments:

Sample 4: Positive control. Made according to the protocol instructions (5 minutes in the thermocycler and 5 µl of Tagment DNA Buffer).

Sample 4 + min: Instead of 5 minutes in the thermocycler, I left it for 10, in order to give the enzyme more time to tagment my DNA.

I assumed I would obtain smaller fragment sizes, because the enzyme would have had more time to tag the DNA.

My result was even worse, as you can see in the ReRun image file.

As you can see, my control sample now gave a smaller fragment size than before (1130 vs 1670 bp), which makes no sense, but my "experiment" sample gave even larger fragments (1709 bp).

Q1. Why did I get bigger fragment sizes?

Q2. According to previous posts, maybe decreasing the extension time for the PCR and adding some extra cycles could be a solution. Do you think this applies also to Nextera XT? Any other advice?

Thanks in advance
Attached Images
File Type: jpg Recopilation.jpg (86.1 KB, 62 views)
File Type: png ReRun.png (116.2 KB, 42 views)
Old 06-09-2015, 06:32 AM   #30
creeves
Member
 
Location: East Bay

Join Date: Jul 2012
Posts: 26
Default Nextera library sizes

A library with an average size up to 2000 bp is no problem. See our just-published paper in ACS Synthetic Biology for some tips. The length of the tagmentation is not important as long as it goes to completion, i.e. all the transposomes have tagmented the DNA sample. Tagmentation is stoichiometric, not catalytic. The amount of DNA going into the tagmentation reaction is critical: too much DNA will give fragments too large to be amplified, and too little DNA will give small fragments that will be lost during the SPRI cleanup. If you are using the Nextera XT kit and protocol, the extension time should be fine. Most likely you need to quantify and dilute your DNA more carefully.
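A toy model makes the scaling concrete. The stoichiometric assumption (one insertion per loaded transposome) is creeves' point above; the transposome count and the bp-per-ng conversion below are illustrative numbers, not kit specifications.

```python
AVOGADRO = 6.022e23
BP_PER_NG = AVOGADRO / 660 * 1e-9  # ~9.1e11 bp per ng of dsDNA (660 g/mol/bp)

# If tagmentation is stoichiometric, each loaded transposome accounts
# for one insertion, so the expected fragment size is simply
# total base pairs divided by the number of cuts.
def expected_fragment_bp(dna_ng, n_transposomes):
    return dna_ng * BP_PER_NG / n_transposomes

print(expected_fragment_bp(50, 9e10))  # ~500 bp with these made-up numbers

# Doubling the DNA input (same amount of enzyme) doubles the
# expected fragment size:
ratio = expected_fragment_bp(100, 9e10) / expected_fragment_bp(50, 9e10)
print(ratio)  # 2.0
```

The linear dependence on input is why "exactly 50 ng" matters for a fixed amount of enzyme: too much DNA shifts fragments beyond what PCR can amplify, too little shifts them below the SPRI cutoff.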
Old 06-11-2015, 01:26 AM   #31
chariko
Member
 
Location: Spain

Join Date: Jun 2010
Posts: 56
Default

Quote:
Originally Posted by creeves View Post
A library with an average size up to 2000 bp is no problem. See our just-published paper in ACS Synthetic Biology for some tips. The length of the tagmentation is not important as long as it goes to completion, i.e. all the transposomes have tagmented the DNA sample. Tagmentation is stoichiometric, not catalytic. The amount of DNA going into the tagmentation reaction is critical: too much DNA will give fragments too large to be amplified, and too little DNA will give small fragments that will be lost during the SPRI cleanup. If you are using the Nextera XT kit and protocol, the extension time should be fine. Most likely you need to quantify and dilute your DNA more carefully.
Thanks for your answer; I will take a look at your paper (reference?).

Anyway, the DNA was quantified with Qubit prior to being diluted, so I suppose everything was correct, but I will check it again...

Does anyone have any other clues?
Old 06-11-2015, 03:36 AM   #32
nucacidhunter
Jafar Jabbari
 
Location: Melbourne

Join Date: Jan 2013
Posts: 1,179
Default

There could be many reasons, but considering the Bioanalyzer trace I would check the following:
1- pipetting issues (most likely calibration), which would cause errors in quantification and normalisation

2- adequate mixing of the reactions
Old 03-31-2016, 07:05 AM   #33
anamar
Junior Member
 
Location: Zürich

Join Date: Sep 2011
Posts: 4
Default

Quote:
Originally Posted by pmiguel View Post
Actually, I think that sample 2 has some >12 kb stuff in it that ended up running into sample 3. Which may sound crazy, but over time I have come to the conclusion that some lanes share part of the same paths. So if they do not completely clear, high molecular weight stuff from an earlier well can end up in a later one.



--
Phillip


Hello Phillip, while doing my library prep, after amplification with the SMARTer kit, I am almost certain that some huge molecules move into the next well on the Bioanalyzer HS chips. I know this is a very old post of yours, but have you tested this and/or do you have alternative explanations? Best, Ana
Old 03-31-2016, 07:15 AM   #34
M4TTN
Member
 
Location: UK

Join Date: Jan 2014
Posts: 74
Default

Hi anamar,

I am pretty sure that all the Bioanalyzer lanes converge into one area on the chip - at the point where they are "analysed". Each sample reaches it at a different time, based on the relative lengths of the paths on the Bioanalyzer chip.

So, yes, if what I just stated is correct, I think it is entirely possible that very large molecules could migrate so slowly that they "contaminate" the trace of the next sample that passes through the analysis/quantification area.
Old 04-18-2016, 03:17 PM   #35
fanli
Senior Member
 
Location: California

Join Date: Jul 2014
Posts: 198
Default

Quote:
Originally Posted by creeves View Post
A library with average size up to 2000 is no problem. See our just published paper in ACS Synthetic Biology for some tips. The length of time of tagmentation is not important as long as it goes to completion, i.e. all the transposomes have tagmented the DNA sample. Tagmentation is stoichiometric, not catalytic The amount of DNA going into the tagmentation reaction is critical. Too much DNA will give fragments too large to be amplified and too little DNA will give small fragments that will be lost during SPRI. If you are using the Nextera XT kit and protocol, the extension time should be fine. Most likely you need to quantify and dilute your DNA more carefully.
@creeves, can you elaborate on why the tagmentation is stoichiometric? If I understand correctly, it is cut and paste - so do you eventually run out of adapter sequence to "cut"? Quoted from http://genome.cshlp.org/content/24/12/2033.full:
Quote:
Transposition works through a “cut-and-paste” mechanism, where the Tn5 excises itself from the donor DNA and inserts into a target sequence, creating a 9-bp duplication of the target
Old 04-18-2016, 05:25 PM   #36
luc
Senior Member
 
Location: US

Join Date: Dec 2010
Posts: 342
Default

Fanli,
the transposase in the kit does not have any donor DNA available to it. The enzyme needs to be "loaded" beforehand with oligos; thus each enzyme molecule can cut only once (or is it twice?).
The appeal of the Nextera protocol for most applications is an enigma to me.
Old 04-19-2016, 10:20 AM   #37
fanli
Senior Member
 
Location: California

Join Date: Jul 2014
Posts: 198
Default

@luc, thanks, that makes more sense. It seems a really inefficient use of enzyme, though, IMO. For future reference, here's another helpful link:
http://bitesizebio.com/13567/too-goo...ra-do-for-you/

We got sucked into Nextera from the Fluidigm -> Clontech protocol. :/
Old 04-19-2016, 10:27 AM   #38
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,291
Default

Quote:
Originally Posted by fanli View Post
@luc, Thanks that makes more sense. Seems to be a really inefficient use of enzyme though IMO. For future reference, here's another helpful link:
http://bitesizebio.com/13567/too-goo...ra-do-for-you/

We got sucked into Nextera from the Fluidigm -> Clontech protocol. :/
What problem are you having? The Fluidigm C1 protocol generates cDNA that seems to work well with the 1/4 Nextera reactions recommended by Fluidigm.

Did you determine the amount of cDNA to add using fluorimetry?

For 1-2 96-well plates the Nextera protocol is fast and effective. Of course if you are doing one of those new 600 well Fluidigm chips it would be a substantial amount of scale-up...

--
Phillip
Old 04-19-2016, 10:32 AM   #39
fanli
Senior Member
 
Location: California

Join Date: Jul 2014
Posts: 198
Default

Yeah, we haven't had any issues with the C1 libraries. Our issues actually come from bulk-cell (albeit low-input) RNA-seq and shotgun metagenomics libraries.

Not to get too off-topic, but we end up with fragment sizes that are somewhat longer than optimal for the NextSeq 500, yet similar in size to the C1 libraries. Loading the recommended concentration (~1.8 pM) gets us good cluster density etc. for C1 libraries, but grossly underclustered flow cells for the other two library types (19 K/mm2 on our last RNA-seq run). Long story short, we're trying to figure out why this is happening and got to thinking about the tagmentation. But if the fragment distribution after tagmentation is similar, then they should cluster similarly on a flow cell, right?
Old 04-19-2016, 11:19 AM   #40
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,291
Default

So qPCR (e.g. KAPA) titration isn't predicting the cluster density correctly?

--
Phillip