SEQanswers

SEQanswers (http://seqanswers.com/forums/index.php)
-   Sample Prep / Library Generation (http://seqanswers.com/forums/forumdisplay.php?f=25)
-   -   Cannot Normalize DNA for GBS/RadSeq, Nonsense Qbit results (http://seqanswers.com/forums/showthread.php?t=50707)

Myrmex 03-02-2015 11:07 AM

Cannot Normalize DNA for GBS/RadSeq, Nonsense Qbit results
 
Our lab is carrying out an initial pilot test of two different RADseq/GBS protocols to see which will work best for us. Both protocols emphasize the importance of normalizing the DNA extractions to around 10–20 ng/µl, using a Qubit or similar, before beginning the library prep. So far this deceptively simple first step has been nearly impossible:

I measure my 48 samples on the Qubit, then individually dilute each sample to bring them all to the same concentration, and when I remeasure, the values are most often nowhere near what I expect. Some are in the ballpark (possibly by coincidence); many are off by 5–100% from where they should be. For example, I will measure four samples that each read around 20 ng/µl, dilute them all to half their original concentration, and on remeasuring get varied results like 3 ng/µl, 9 ng/µl, 19 ng/µl, and 22 ng/µl instead of the expected 10 ng/µl. Since this problem arose I have done just about every iteration of troubleshooting I can think of over the past month and can't make these nonsensical values go away. I have:

- used both the broad-range and the high-sensitivity kits.
- ordered a new kit in case our old one was compromised.
- used a different Qubit at a different facility.
- troubleshot with "cleaner" DNA (e.g. lambda DNA and oligos from biotech companies).

...and the above-mentioned problem still occurs in every one of these situations.

It would be great to get input from someone who has faced similar problems and hear what they did about it. At this point my instinct is to measure the samples once, trust the number, and move on, but I am very nervous about having over- or under-represented samples in a multiplexed library. I am very curious to know what is actually happening behind the scenes when a library prep protocol says something like "We normalized all DNA to 20 ng/µl" at the start, as I am beginning to feel like this is basically impossible...


Thanks in advance.
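For what it's worth, the dilution arithmetic behind that normalization step is just C1V1 = C2V2. A minimal sketch with illustrative numbers; `dilution_volumes` is a hypothetical helper name, not from any kit manual:

```python
# The normalization step is just C1*V1 = C2*V2 arithmetic. A minimal sketch
# with illustrative numbers; `dilution_volumes` is a hypothetical helper.
def dilution_volumes(c_initial, c_target, final_volume):
    """Return (sample_ul, diluent_ul) needed to reach c_target ng/ul
    in final_volume ul of diluted sample."""
    if c_target > c_initial:
        raise ValueError("cannot dilute upward: target exceeds stock concentration")
    sample = c_target * final_volume / c_initial
    return sample, final_volume - sample

# A 20 ng/ul stock diluted to 10 ng/ul in a final volume of 50 ul:
sample_ul, water_ul = dilution_volumes(20.0, 10.0, 50.0)
print(sample_ul, water_ul)  # 25.0 ul stock + 25.0 ul water
```

The math itself can't be the problem here; the question in this thread is why the remeasured values drift so far from what this calculation predicts.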

SNPsaurus 03-02-2015 12:37 PM

Have you run out the samples on a gel to look at the relative levels? Not that I understand your problem, but it is good to get an orthogonal measurement when things look weird (and to check for quality issues as well).

Myrmex 03-02-2015 12:48 PM

Thanks for the suggestion. We haven't run gels yet. I don't have any experience with gel quantification but have been reading about it. It seems like a good side check that the input samples are at least at roughly the same relative concentrations...

SNPsaurus 03-02-2015 12:59 PM

There are very careful ways to do it, and there are ladders to help estimate the amounts, but you can also just run an agarose gel and see what you see; in this case that alone would help.

nucacidhunter 03-03-2015 10:09 AM

I can think of three reasons:

1- Pipetting inaccuracy caused by calibration/maintenance issues or the operator
2- Viscous material in the DNA preps (mostly in non-column-based extractions). This shows up as streaks in the wells of a gel. It can be avoided by spinning the DNA tube or plate at high speed for 5 min to pellet the insoluble material and pipetting from the top of the well.
3- Inadequate mixing of the DNA with the reagents or after dilution

Myrmex 03-03-2015 12:49 PM

Quote:

Originally Posted by nucacidhunter (Post 161508)
I can think of three reasons:

1- Pipetting inaccuracy caused by calibration/maintenance issues or the operator
2- Viscous material in the DNA preps (mostly in non-column-based extractions). This shows up as streaks in the wells of a gel. It can be avoided by spinning the DNA tube or plate at high speed for 5 min to pellet the insoluble material and pipetting from the top of the well.
3- Inadequate mixing of the DNA with the reagents or after dilution

Okay, thank you, this makes a lot of sense. We do have viscous material in our CTAB extractions, which is one of the explanations we came up with for why this is happening. When I reduce a sample's volume in a SpeedVac from 100 µl to 50 µl, the material gets a lot more viscous. I have been trying to mix things very well before measuring on the Qubit, but now I realize that maybe I should be spinning them down first and taking only the liquid from the top.

The other option, I guess, is to re-extract with a column extraction. I was just worried about getting enough DNA, as we are already riding a thin line as far as that is concerned...

nucacidhunter 03-03-2015 01:11 PM

One way to get rid of the viscous material is a column clean-up. In this case the sample needs to be diluted before adding the binding buffer (as much as the kit's binding buffer volume allows) to prevent clogging the columns. In addition, the DNA concentration can be adjusted through the elution volume.

Myrmex 03-03-2015 02:20 PM

Okay, thank you very much. I will go from there... Do you have any feeling for whether I should be concerned about DNA loss with column extractions or column clean-ups? I know that is a common fear with columns but don't know how well-founded it is. The manufacturer told us to expect about 95% recovery unless fragments are over 50 kb. I have no idea whether we have DNA pieces that big or not. I assume I am probably overly fearful, since I imagine others have done GBS/RADseq protocols with column-extracted DNA...

nucacidhunter 03-03-2015 11:09 PM

In this case you only need to do a clean-up, as the DNA has already been extracted. Sample loss depends on the column specifications, but you can maximise recovery by eluting with hot buffer and doing a double elution. In my experience a 10–15% loss is normal. In your case it will also depend on how strongly the DNA is bound to the viscous material, which may pass through the column with it. The best approach is to trial it on a less precious sample, quantifying with dsDNA-specific reagents before and after the clean-up, because it is the dsDNA that will contribute to the final library.

Myrmex 03-04-2015 09:29 AM

Okay, that makes a lot of sense. Thanks so much for your help everyone, we've been struggling with this for a while. It's nice to have something else to go off of.

Terminator 03-04-2015 11:57 AM

Hello,

Did you re-run the undiluted DNA side by side with the diluted DNA as a control?

What do the raw fluorescence readings for controls look like between the two runs?

Assuming you have a homogeneous mixture of DNA with no clumping, try adding additional Qubit standards. I have seen a fair deal of variation in the past so now I run 10 controls (2 x each of 0, 25, 50, 75, and 100 ng) and plot it myself to calculate my DNA concentrations.

Myrmex 03-04-2015 12:11 PM

Quote:

Originally Posted by Terminator (Post 161602)
Hello,

Did you re-run the undiluted DNA side by side with the diluted DNA as a control?

What do the raw fluorescence readings for controls look like between the two runs?

Assuming you have a homogeneous mixture of DNA with no clumping, try adding additional Qubit standards. I have seen a fair deal of variation in the past so now I run 10 controls (2 x each of 0, 25, 50, 75, and 100 ng) and plot it myself to calculate my DNA concentrations.

I did run undiluted controls a couple of times but should start doing it every time. If I run undiluted controls over and over (without changing anything), there is variation of maybe 5–20% or so (which I have no problem with; at least it's qualitatively similar). The larger error comes in when we change the dilution, and especially when we concentrate the DNA to a smaller volume, where it becomes more viscous, so I imagine, as suggested above, that this is a serious source of error.

That is a great idea to do the extra standards across the range and then plot them. Thanks!

Terminator 03-04-2015 01:33 PM

Quote:

Originally Posted by Myrmex (Post 161604)
I did run undiluted controls a couple of times but should start doing it every time. If I run undiluted controls over and over (without changing anything), there is variation of maybe 5–20% or so (which I have no problem with; at least it's qualitatively similar). The larger error comes in when we change the dilution, and especially when we concentrate the DNA to a smaller volume, where it becomes more viscous, so I imagine, as suggested above, that this is a serious source of error.

That is a great idea to do the extra standards across the range and then plot them. Thanks!

I'm not sure why the Qubit only requires two standards (seems crazy).

I found a post (I don't recall the specific thread) where a user recommended running larger volumes for dilute DNA samples. This may also be worth a shot for reducing variability.

Best of luck!

kerplunk412 03-04-2015 02:04 PM

I have noticed something similar when quantifying gDNA on a Nanodrop. Values would vary quite a bit when reading different "drops" from the same tube of gDNA. This was solved by vortexing the DNA for 10 seconds. The idea is that genomic DNA molecules are so large that any given microliter might contain a varying amount of these large molecules. After vortexing, the gDNA is in much smaller fragments, which distribute more evenly in solution, so every microliter carries a much more similar amount of DNA. Think of it as grabbing handfuls of sand versus handfuls of medium-sized rocks and weighing them: the handfuls of sand will be very close to the same weight each time, but the rocks will vary much more. I haven't tested the theory about large vs. sheared DNA, but we have tested vortexing DNA for 10 seconds before reading on the Nanodrop, and it definitely gives much more consistent readings.
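The sand-versus-rocks intuition can be checked with a toy simulation: split the same expected mass per grab into a few large pieces or many small ones, and compare the grab-to-grab spread. All numbers here are illustrative, not real Nanodrop behaviour:

```python
# Toy simulation of the sand-vs-rocks idea above: the mass captured in a
# small "grab" varies more when the same total DNA is split into fewer,
# larger pieces. All numbers are illustrative.
import random
from statistics import pstdev

def grab_masses(n_pieces, piece_mass, grab_fraction=0.01, trials=100, seed=1):
    """Simulate `trials` grabs that each independently capture every piece
    with probability `grab_fraction`; return the total mass of each grab."""
    random.seed(seed)
    masses = []
    for _ in range(trials):
        taken = sum(1 for _ in range(n_pieces) if random.random() < grab_fraction)
        masses.append(taken * piece_mass)
    return masses

# Same expected mass per grab (1000 units), different piece sizes:
rocks = grab_masses(n_pieces=1_000, piece_mass=100.0)   # few large fragments
sand = grab_masses(n_pieces=100_000, piece_mass=1.0)    # many small fragments
print(pstdev(rocks), pstdev(sand))  # the "rocks" grabs vary far more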

nucacidhunter 03-05-2015 12:56 AM

Quote:

Originally Posted by Terminator (Post 161607)
I'm not sure why the Qubit only requires two standards (seems crazy).

The linear detection range of PicoGreen spans three orders of magnitude, from 1 ng/ml to 1000 ng/ml DNA. As long as the fluorometer is calibrated at 0 and 1000, there is no need for any concentration in between or for a standard curve; that seems a waste of money and time. With correct calibration one only needs to multiply the fluorescence value by the dilution factor to calculate the original concentration of the DNA sample.
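The two-point logic described here is a one-line interpolation: with a blank, a single high standard, and a linear dye response, a raw reading maps straight onto concentration. A sketch with a hypothetical function name and made-up readings:

```python
# Sketch of the two-point logic described above: blank + one high standard
# plus a linear dye response. Function name and readings are hypothetical.
def sample_conc(reading, blank, high_std, high_conc=1000.0, dilution_factor=1.0):
    """Estimate the original sample concentration (ng/ml) from a raw
    fluorescence reading, interpolating between the two calibration
    points and scaling by the dilution factor."""
    return (reading - blank) / (high_std - blank) * high_conc * dilution_factor

# A sample diluted 1:20 that reads halfway between the standards:
print(sample_conc(525.0, blank=50.0, high_std=1000.0, dilution_factor=20.0))
# 10000.0 ng/ml in the undiluted sample, i.e. 10 ng/ul
```

Note this interpolation is exactly a linear regression through two points; extra standards only add value as a check that the response really is linear across the range you are using.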

nucacidhunter 03-05-2015 12:58 AM

Quote:

Originally Posted by kerplunk412 (Post 161608)
I have noticed something similar when quantifying gDNA by Nanodrop. What I saw was values that would vary quite a bit when reading different "drops" from the same tube of gDNA. This was solved by vortexing the DNA for 10 seconds. The idea is that the genomic DNA molecules are so large that one microliter might have varying amount of these large DNA molecules. Following vortexing the gDNA is in much smaller fragments, which allows it to exist more evenly in solution such that every microliter will have a much more similar amount of DNA. Think of it of grabbing handfuls of sand versus handfuls of medium sized rocks and weighing them. The handful of sand will be very close to the same weight each time, but the rocks will vary much more. I haven't tested the theory about large DNA vs sheared DNA, but we have tested vortexing DNA for 10 seconds prior to reading on the Nanodrop and it definitely results in much more consistent readings.

The mass of the genome in one human cell is about 6.6 pg, so 1 µl of a 10 ng/µl DNA solution holds the equivalent of the DNA from roughly 1,515 cells, which is on the order of tens of millions of 100 kb fragments. Most standard extraction methods yield fragments shorter than 100 kb. So I do not see how one can justify that tens of millions of fragments in 1 µl would aggregate in solution to give 10x variation between consecutive reads. I think a gentle flick would be enough to homogenize the solution (if the sample was frozen), and vortexing would definitely damage large DNA fragments.
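This back-of-the-envelope count is easy to reproduce. The assumed values (6.6 pg of DNA per diploid human cell, ~978 Mbp per pg, 100 kb fragments) are standard conversions, but the exact total depends on whether you count the haploid or the diploid genome:

```python
# Quick version of the back-of-the-envelope count above. Assumed values:
# 6.6 pg of DNA per diploid human cell and ~978 Mbp per pg.
ng_per_ul = 10.0
pg_per_cell = 6.6
bp_per_pg = 978e6
fragment_bp = 100_000          # assume 100 kb fragments

cells_per_ul = ng_per_ul * 1000 / pg_per_cell                  # cell equivalents
fragments_per_ul = ng_per_ul * 1000 * bp_per_pg / fragment_bp  # 100 kb pieces
print(int(cells_per_ul), int(fragments_per_ul))  # ~1515 cells, ~1e8 fragments
```

Either way the count lands at 10^7–10^8 fragments per microliter, which is the point of the argument: that many molecules should sample very evenly.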

Terminator 03-05-2015 11:36 AM

Quote:

Originally Posted by nucacidhunter (Post 161625)
The linear detection range of PicoGreen spans three orders of magnitude, from 1 ng/ml to 1000 ng/ml DNA. As long as the fluorometer is calibrated at 0 and 1000, there is no need for any concentration in between or for a standard curve; that seems a waste of money and time. With correct calibration one only needs to multiply the fluorescence value by the dilution factor to calculate the original concentration of the DNA sample.

The 500 reaction kit is cheap (60 cents/sample) and it takes a matter of minutes to add a few extra standards. You are correct regarding the linear detection range; however, I think you should review linear regression.

kerplunk412 03-05-2015 03:47 PM

Quote:

Originally Posted by nucacidhunter (Post 161626)
The mass of the genome in one human cell is about 6.6 pg, so 1 µl of a 10 ng/µl DNA solution holds the equivalent of the DNA from roughly 1,515 cells, which is on the order of tens of millions of 100 kb fragments. Most standard extraction methods yield fragments shorter than 100 kb. So I do not see how one can justify that tens of millions of fragments in 1 µl would aggregate in solution to give 10x variation between consecutive reads. I think a gentle flick would be enough to homogenize the solution (if the sample was frozen), and vortexing would definitely damage large DNA fragments.

Your logic makes sense to me, so maybe the difference in size before and after vortexing does not explain my observations. However, I tested this fairly rigorously, and a few of my colleagues have tried it as well, so I can say with confidence that with the gDNA samples I was working with, a gentle flick was not enough to get a consistent reading; vortexing was required. As for damaging the DNA, I am pretty sure 10 seconds of vortexing will not cause enough fragmentation to matter for most NGS applications. If it were that easy to fragment DNA into small pieces, no one would need to buy a Covaris!

Edit: I should also mention that the variation seen before vortexing was at most about 2x. Variation after vortexing was ~1%.

