SEQanswers

Old 03-02-2015, 11:07 AM   #1
Myrmex
Junior Member
 
Location: Chicago, IL

Join Date: Dec 2014
Posts: 9
Cannot Normalize DNA for GBS/RADseq, Nonsense Qubit results

Our lab is carrying out an initial pilot test of two different RADseq/GBS protocols to see which will work best for us. Both protocols emphasize the importance of normalizing the DNA extractions to around 10–20 ng/µl, measured with a Qubit or similar, before beginning the library prep. So far this deceptively simple first step has been nearly impossible:

I measure my 48 samples on the Qubit, individually dilute them to bring them to the same concentration, and then remeasure; the values are most often nowhere near what I expect. Some are in the ballpark (possibly by coincidence), but many are off by 5–100% from where they should be. For example, I will measure four samples that read around 20 ng/µl, dilute them all to half their original concentration, and on re-measuring get results like 3 ng/µl, 9 ng/µl, 19 ng/µl, and 22 ng/µl instead of the expected 10 ng/µl. Since this problem arose, I have done just about every iteration of troubleshooting I can think of over the past month and can't make these nonsensical values go away. I have:

- used the broad-range and high-sensitivity kits.
- ordered a new kit to see if our old one was compromised.
- used a different Qubit at a different facility.
- troubleshot with "cleaner" DNA (e.g. lambda DNA and oligos from biotech companies)

....and the above-mentioned problem still occurs in all of these situations.

It would be great to get some input from someone who has faced similar problems and hear what they did about it. At this point my instinct is to measure the samples once, trust the number, and move on, but I am very nervous about having over- or under-represented samples in a multiplexed library. I am also very curious to know what is actually happening behind the scenes when a library prep protocol says something like "We normalized all DNA to 20 ng/µl" at the start, as I am starting to feel like this is basically impossible...
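
For reference, the arithmetic behind my dilutions is just C1V1 = C2V2. Below is a minimal Python sketch of how the per-sample DNA and water volumes could be worked out from the Qubit readings; the target concentration, final volume, and sample readings are made-up examples, not values from either protocol.

Code:
# Per-sample dilution volumes from fluorometer readings, using C1*V1 = C2*V2.
# Target concentration and final volume are example values only.
def normalization_plan(readings_ng_per_ul, target_ng_per_ul=10.0, final_vol_ul=50.0):
    """Return {sample: (dna_ul, water_ul)}; None if the sample is already too dilute."""
    plan = {}
    for sample, conc in readings_ng_per_ul.items():
        if conc < target_ng_per_ul:
            plan[sample] = None  # cannot dilute upwards; needs concentrating or re-extraction
            continue
        dna_ul = target_ng_per_ul * final_vol_ul / conc   # V1 = C2 * V2 / C1
        plan[sample] = (round(dna_ul, 1), round(final_vol_ul - dna_ul, 1))
    return plan

# Example with hypothetical readings for three of the 48 samples:
print(normalization_plan({"S01": 22.4, "S02": 19.8, "S03": 8.3}))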


Thanks in advance.
Myrmex is offline   Reply With Quote
Old 03-02-2015, 12:37 PM   #2
SNPsaurus
Registered Vendor
 
Location: Eugene, OR

Join Date: May 2013
Posts: 521

Have you run out the samples on a gel to look at the relative levels? Not that I understand your problem, but it is good to get an orthogonal measurement when things look weird (and to check for quality issues as well).
__________________
Providing nextRAD genotyping and PacBio sequencing services. http://snpsaurus.com
SNPsaurus is offline   Reply With Quote
Old 03-02-2015, 12:48 PM   #3
Myrmex
Junior Member
 
Location: Chicago, IL

Join Date: Dec 2014
Posts: 9

Thanks for the suggestion. We haven't run gels yet. I don't have any experience running gels for quantification but have been reading about it. It seems like a good side check that the input samples are at least at roughly similar concentrations...
Myrmex is offline   Reply With Quote
Old 03-02-2015, 12:59 PM   #4
SNPsaurus
Registered Vendor
 
Location: Eugene, OR

Join Date: May 2013
Posts: 521

There are very careful ways to do it, and there are ladders to help estimate the amounts, but you can just run out an agarose gel and see what you see, and in this case that would help.
__________________
Providing nextRAD genotyping and PacBio sequencing services. http://snpsaurus.com
SNPsaurus is offline   Reply With Quote
Old 03-03-2015, 10:09 AM   #5
nucacidhunter
Jafar Jabbari
 
Location: Melbourne

Join Date: Jan 2013
Posts: 1,234

I can think of three reasons:

1- Pipetting inaccuracy caused by calibration/maintenance issues or the operator
2- Viscous material in the DNA preps (mostly in non-column-based extractions). This shows up as streaking in the wells of a gel. It can be avoided by spinning the DNA tube or plate at high speed for 5 min to pellet the insoluble material and then pipetting from the top of the wells.
3- Insufficient mixing of the DNA with the reagents, or after dilution
nucacidhunter is offline   Reply With Quote
Old 03-03-2015, 12:49 PM   #6
Myrmex
Junior Member
 
Location: Chicago, IL

Join Date: Dec 2014
Posts: 9

Quote:
Originally Posted by nucacidhunter View Post
I can think of three reasons:

1- Pipetting inaccuracy caused by calibration/maintenance issues or the operator
2- Viscous material in the DNA preps (mostly in non-column-based extractions). This shows up as streaking in the wells of a gel. It can be avoided by spinning the DNA tube or plate at high speed for 5 min to pellet the insoluble material and then pipetting from the top of the wells.
3- Insufficient mixing of the DNA with the reagents, or after dilution
Okay, thank you, this makes a lot of sense. We do have viscous material in our CTAB extractions, which was one of our ideas for why this is happening. When I reduce the volume of a sample in the SpeedVac from 100 µl to 50 µl, the material gets a lot more viscous. I have been trying to mix things very well before measuring on the Qubit, but now I realize that maybe I should be spinning the samples down first and taking only the cleared liquid from the top.

The other option, I guess, is to re-extract with a column-based extraction. I was just worried about getting enough DNA, as we are already riding a thin line as far as that is concerned...
Myrmex is offline   Reply With Quote
Old 03-03-2015, 01:11 PM   #7
nucacidhunter
Jafar Jabbari
 
Location: Melbourne

Join Date: Jan 2013
Posts: 1,234

One way to get rid of the viscous material is a column clean-up. In this case the sample needs to be diluted before the binding buffer is added (as much as the kit's binding buffer volume allows) to prevent the column from clogging. In addition, the DNA concentration can be adjusted via the elution volume.
nucacidhunter is offline   Reply With Quote
Old 03-03-2015, 02:20 PM   #8
Myrmex
Junior Member
 
Location: Chicago, IL

Join Date: Dec 2014
Posts: 9

Okay, thank you very much. I will go from there... Do you have any feeling for whether I should be concerned about DNA loss with filter extractions or filter clean-ups? I know that is a common fear with filters, but I don't know how well-founded it is. We have been told by the manufacturer that there should be 95% recovery unless pieces are over 50 kb. I have no idea whether we have pieces of DNA that big. I assume I am probably being overly fearful, since I imagine others have done GBS/RADseq protocols with filter-extracted DNA...
Myrmex is offline   Reply With Quote
Old 03-03-2015, 11:09 PM   #9
nucacidhunter
Jafar Jabbari
 
Location: Melbourne

Join Date: Jan 2013
Posts: 1,234

In this case you only need to do a clean-up, as the DNA has already been extracted. Sample loss depends on the column specifications, but you can maximise recovery by eluting with hot buffer and doing a double elution. In my experience a 10-15% loss is normal. In your case it will also depend on how strongly the DNA is bound to the viscous material, which may pass through the column. The best approach is to trial it on a less precious sample, quantifying with dsDNA-specific reagents before and after the clean-up, because it is the dsDNA that will contribute to the final library.
nucacidhunter is offline   Reply With Quote
Old 03-04-2015, 09:29 AM   #10
Myrmex
Junior Member
 
Location: Chicago, IL

Join Date: Dec 2014
Posts: 9

Okay, that makes a lot of sense. Thanks so much for your help everyone, we've been struggling with this for a while. It's nice to have something else to go off of.
Myrmex is offline   Reply With Quote
Old 03-04-2015, 11:57 AM   #11
Terminator
Junior Member
 
Location: Canada

Join Date: Oct 2014
Posts: 3

Hello,

Did you re-run the undiluted DNA side by side with the diluted DNA as a control?

What do the raw fluorescence readings for controls look like between the two runs?

Assuming you have a homogeneous mixture of DNA with no clumping, try adding additional Qubit standards. I have seen a fair deal of variation in the past so now I run 10 controls (2 x each of 0, 25, 50, 75, and 100 ng) and plot it myself to calculate my DNA concentrations.
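
Roughly what I mean by plotting it myself, as a sketch: an ordinary least-squares fit of ng against raw fluorescence. The standard and sample readings below are placeholders, not real data.

Code:
# Fit a standard curve from duplicate standards and interpolate unknowns.
# Fluorescence values are placeholders, not real readings.
import numpy as np

std_ng   = np.array([0, 0, 25, 25, 50, 50, 75, 75, 100, 100])       # ng of DNA per standard
std_fluo = np.array([52, 55, 310, 305, 590, 600, 880, 870, 1150, 1160])

slope, intercept = np.polyfit(std_fluo, std_ng, 1)  # ng as a linear function of fluorescence

def ng_from_fluorescence(fluo):
    return slope * fluo + intercept

for fluo in (420.0, 735.0):                          # hypothetical sample readings
    print(fluo, "->", round(ng_from_fluorescence(fluo), 1), "ng")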
Terminator is offline   Reply With Quote
Old 03-04-2015, 12:11 PM   #12
Myrmex
Junior Member
 
Location: Chicago, IL

Join Date: Dec 2014
Posts: 9

Quote:
Originally Posted by Terminator View Post
Hello,

Did you re-run the undiluted DNA side by side with the diluted DNA as a control?

What do the raw fluorescence readings for controls look like between the two runs?

Assuming you have a homogeneous mixture of DNA with no clumping, try adding additional Qubit standards. I have seen a fair deal of variation in the past so now I run 10 controls (2 x each of 0, 25, 50, 75, and 100 ng) and plot it myself to calculate my DNA concentrations.
I did run undiluted controls a couple of times but should start doing it every time. If I run undiluted controls over and over (without changing anything), there is variation of maybe 5–20% or so (which I have no problem with; at least it's qualitatively similar). The larger error comes in when we change the dilution, and especially when we concentrate the DNA to a lower volume, which makes it more viscous, so I imagine, as suggested above, that this is a serious source of error.

That is a great idea to do the extra standards across the range and then plot them. Thanks!
Myrmex is offline   Reply With Quote
Old 03-04-2015, 01:33 PM   #13
Terminator
Junior Member
 
Location: Canada

Join Date: Oct 2014
Posts: 3

Quote:
Originally Posted by Myrmex View Post
I did run undiluted controls a couple of times but should start doing it every time. If I run undiluted controls over and over (without changing anything), there is variation of maybe 5–20% or so (which I have no problem with; at least it's qualitatively similar). The larger error comes in when we change the dilution, and especially when we concentrate the DNA to a lower volume, which makes it more viscous, so I imagine, as suggested above, that this is a serious source of error.

That is a great idea to do the extra standards across the range and then plot them. Thanks!
I'm not sure why the Qubit only requires two standards (seems crazy).

I found a post (I don't recall the specific thread) where a user recommended running larger volumes for dilute DNA samples. This may also be worth a shot for reducing variability.

Best of luck!
Terminator is offline   Reply With Quote
Old 03-04-2015, 02:04 PM   #14
kerplunk412
Senior Member
 
Location: Bioo Scientific, Austin, TX, USA

Join Date: Jun 2012
Posts: 119

I have noticed something similar when quantifying gDNA by Nanodrop. What I saw were values that varied quite a bit when reading different "drops" from the same tube of gDNA. This was solved by vortexing the DNA for 10 seconds. The idea is that genomic DNA molecules are so large that one microliter might contain varying amounts of these large molecules. After vortexing, the gDNA is in much smaller fragments, which lets it distribute more evenly in solution so that every microliter carries a much more similar amount of DNA. Think of it as grabbing handfuls of sand versus handfuls of medium-sized rocks and weighing them: the handfuls of sand will be very close to the same weight each time, but the rocks will vary much more. I haven't tested the theory about large vs. sheared DNA, but we have tested vortexing DNA for 10 seconds prior to reading on the Nanodrop, and it definitely results in much more consistent readings.
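
If it helps, here is a toy simulation of the sand-versus-rocks idea: the same total mass of DNA split into a few large or many small fragments, with random 1 µl aliquots drawn from a 100 µl tube. The fragment counts are made up purely for illustration.

Code:
# "Sand vs rocks": sampling variability of 1 ul aliquots when the same mass of
# DNA is split into few large or many small fragments. Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
tube_ul, aliquot_ul, total_ng = 100, 1, 1000          # 10 ng/ul overall

for n_fragments, label in [(2_000, "few large fragments"),
                           (2_000_000, "many small fragments")]:
    ng_per_fragment = total_ng / n_fragments
    # each fragment independently ends up in the 1 ul aliquot with probability 1/100
    counts = rng.binomial(n_fragments, aliquot_ul / tube_ul, size=10_000)
    masses_ng = counts * ng_per_fragment
    print(f"{label}: mean {masses_ng.mean():.2f} ng, "
          f"CV {100 * masses_ng.std() / masses_ng.mean():.1f} %")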
kerplunk412 is offline   Reply With Quote
Old 03-05-2015, 12:56 AM   #15
nucacidhunter
Jafar Jabbari
 
Location: Melbourne

Join Date: Jan 2013
Posts: 1,234

Quote:
Originally Posted by Terminator View Post
I'm not sure why the Qubit only requires two standards (seems crazy).
The linear detection range of PicoGreen spans roughly three orders of magnitude, from 1 ng/ml to 1000 ng/ml DNA. As long as the fluorometer is calibrated at 0 and 1000 ng/ml, there is no need for any other concentrations in between or for a standard curve; it seems a waste of money and time. With correct calibration, one only needs to multiply the measured value by the dilution factor to calculate the original concentration of the DNA sample.
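
In code terms, the two-standard calibration amounts to nothing more than this; the fluorescence values, standard concentration and dilution factor below are made-up examples.

Code:
# Two-point calibration: a blank and a top standard define the slope, and the
# sample reading is converted and scaled by the dilution factor.
# All numbers are example values only.
blank_fluo, top_fluo = 48.0, 11500.0
top_std_ng_per_ml = 1000.0

slope = top_std_ng_per_ml / (top_fluo - blank_fluo)     # ng/ml per fluorescence unit

def original_conc_ng_per_ul(sample_fluo, dilution_factor=200):
    in_assay_ng_per_ml = slope * (sample_fluo - blank_fluo)
    return in_assay_ng_per_ml * dilution_factor / 1000.0   # ng/ml -> ng/ul of the undiluted sample

print(round(original_conc_ng_per_ul(2300.0), 1))  # hypothetical sample reading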
nucacidhunter is offline   Reply With Quote
Old 03-05-2015, 12:58 AM   #16
nucacidhunter
Jafar Jabbari
 
Location: Melbourne

Join Date: Jan 2013
Posts: 1,234

Quote:
Originally Posted by kerplunk412 View Post
I have noticed something similar when quantifying gDNA by Nanodrop. What I saw were values that varied quite a bit when reading different "drops" from the same tube of gDNA. This was solved by vortexing the DNA for 10 seconds. The idea is that genomic DNA molecules are so large that one microliter might contain varying amounts of these large molecules. After vortexing, the gDNA is in much smaller fragments, which lets it distribute more evenly in solution so that every microliter carries a much more similar amount of DNA. Think of it as grabbing handfuls of sand versus handfuls of medium-sized rocks and weighing them: the handfuls of sand will be very close to the same weight each time, but the rocks will vary much more. I haven't tested the theory about large vs. sheared DNA, but we have tested vortexing DNA for 10 seconds prior to reading on the Nanodrop, and it definitely results in much more consistent readings.
The mass of the genome in one human cell is about 6.6 pg, so a DNA solution at 10 ng/µl holds the equivalent of DNA from roughly 1,515 cells, which would be roughly 97 million fragments of 100 kb. Most standard extraction methods result in fragments smaller than 100 kb. So I do not see how one can justify that close to 100 million fragments in 1 µl will aggregate in solution and give 10x variation in consecutive reads. I think a gentle flick would be enough to give a homogeneous solution (if the sample was frozen), and vortexing would definitely damage large DNA fragments.
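
Spelling the arithmetic out (the ~6.4 Gb of sequence for a 6.6 pg diploid genome is my assumption here):

Code:
# Back-of-envelope fragment count for a 10 ng/ul gDNA solution.
# Assumes 6.6 pg and ~6.4 Gb per diploid human genome, cut into 100 kb pieces.
genome_mass_pg = 6.6
genome_size_bp = 6.4e9
conc_ng_per_ul = 10.0

genome_copies_per_ul = conc_ng_per_ul * 1000 / genome_mass_pg   # ~1,515
fragments_per_copy = genome_size_bp / 100_000                   # ~64,000
total_fragments_per_ul = genome_copies_per_ul * fragments_per_copy

print(f"{genome_copies_per_ul:.0f} genome copies, "
      f"{total_fragments_per_ul / 1e6:.0f} million fragments per ul")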
nucacidhunter is offline   Reply With Quote
Old 03-05-2015, 11:36 AM   #17
Terminator
Junior Member
 
Location: Canada

Join Date: Oct 2014
Posts: 3

Quote:
Originally Posted by nucacidhunter View Post
The linear detection range of PicoGreen spans roughly three orders of magnitude, from 1 ng/ml to 1000 ng/ml DNA. As long as the fluorometer is calibrated at 0 and 1000 ng/ml, there is no need for any other concentrations in between or for a standard curve; it seems a waste of money and time. With correct calibration, one only needs to multiply the measured value by the dilution factor to calculate the original concentration of the DNA sample.
The 500 reaction kit is cheap (60 cents/sample) and it takes a matter of minutes to add a few extra standards. You are correct regarding the linear detection range; however, I think you should review linear regression.
Terminator is offline   Reply With Quote
Old 03-05-2015, 03:47 PM   #18
kerplunk412
Senior Member
 
Location: Bioo Scientific, Austin, TX, USA

Join Date: Jun 2012
Posts: 119

Quote:
Originally Posted by nucacidhunter View Post
The mass of the genome in one human cell is about 6.6 pg, so a DNA solution at 10 ng/µl holds the equivalent of DNA from roughly 1,515 cells, which would be roughly 97 million fragments of 100 kb. Most standard extraction methods result in fragments smaller than 100 kb. So I do not see how one can justify that close to 100 million fragments in 1 µl will aggregate in solution and give 10x variation in consecutive reads. I think a gentle flick would be enough to give a homogeneous solution (if the sample was frozen), and vortexing would definitely damage large DNA fragments.
Your logic makes sense to me, so maybe the difference in size before and after vortexing does not explain my observations. However, I tested this fairly rigorously and a few of my colleagues have tried this as well, so I can say with confidence that with the gDNA samples I was working with a gentle flick was not enough to get a consistent reading, vortexing was required. As far as damaging the DNA, I am pretty sure 10 seconds of vortexing will not cause enough DNA fragmentation to matter for most NGS applications. If it was that easy to fragment DNA into small pieces no one would need to buy a Covaris!

Edit: I should also mention that the variation seen before vortexing was at most about 2x. Variation after vortexing was ~1%.

Last edited by kerplunk412; 03-05-2015 at 03:52 PM.
kerplunk412 is offline   Reply With Quote