
  • #16
    Doing it!! My difficult library is 2.7nM. Too lazy to bead-concentrate and re-quant. Plus I like to try useful mods. Loading a 600 cycle kit now. Thanks!!



    • #17
      Yaximik, are you saying that when you neutralize with acetic acid, you don't dilute 100-fold with HT1? You just make it up to 600 ul final?

      Just wanted clarification...

      Best

      Austin



      • #18
        Hi,
        I make 1 N NaOH in a 15 mL tube and make fresh 0.2 N NaOH from it every time. I have not tested longer times, but in my hands 1 N NaOH is stable in a tightly capped 15 mL conical for at least 6 weeks at room temperature. I do not let other people touch my tube. If the cap is left open for a long time, CO2 in the air may change the pH over time.

        One important thing: do not exceed a 1 mM final concentration of NaOH in denatured libraries. I would recommend not even getting close to it. I think hybridization is very sensitive to the amount of NaOH, especially around 1 mM. I start with a 4 nM library to make sure I am not getting close to the 1 mM NaOH limit. If you have lower concentrations, just use freshly prepared 0.1 N or 0.15 N NaOH instead of 0.2 N; it works without any problem. Since I started staying well below the maximum recommended 1 mM NaOH I have never had a problem (previously I had 2-3 failures).
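
        For anyone who wants to check their own numbers, here is a minimal sanity-check sketch in Python (the 5 ul + 5 ul denature made up to 1 mL with HT1 is just the standard Illumina scheme used for illustration; the function name is made up for this example):

        def final_naoh_mM(naoh_N, naoh_ul, final_ul):
            """mM NaOH after diluting the whole denature reaction to final_ul with HT1.
            naoh_N is in mol/L; volumes are in microlitres (1 mol/L == 1 umol/uL)."""
            umol_naoh = naoh_N * naoh_ul          # umol of NaOH added at the denature step
            return umol_naoh / final_ul * 1000.0  # umol/uL -> mol/L, then x1000 -> mM

        # Standard scheme: 5 ul library + 5 ul 0.2 N NaOH, then dilute to 1000 ul with HT1
        print(final_naoh_mM(0.2, 5, 1000))   # 1.0 mM -- right at the limit discussed above
        # Same volumes with freshly prepared 0.1 N NaOH instead
        print(final_naoh_mM(0.1, 5, 1000))   # 0.5 mM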

        Hope this helps!



        • #19
          austinso, Yaximik's recipe does dilute 100-fold with hyb buffer. Since he(?) uses 3 ul NaOH and 3 ul of pool, the final dilution achieves this.

          Thanks rnaeye for the clarification. I almost tried this instead yesterday to get my concentration up, but figured I should let the denaturation go a bit longer if I used a lower NaOH concentration. I found this paper (http://onlinelibrary.wiley.com/doi/1....201200934/pdf) which seemed to indicate that NaOH as low as 0.5 mM might not completely denature the sample.

          I wanted to report my run metrics since I was attempting Yaximik's neutralization mod. The raw library was 2.73 nM, so following this protocol my final (loading) pool concentration was 13.65 pM + 10 uL of 20 pM PhiX = 13.98 pM (call it 14 pM loaded). Density is 1546 +/- 46 k/mm², 81.5% PF (28M reads), 17G projected yield, 93.2% >Q30 after 220 cycles. In a word, fantastic!!
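
          For anyone repeating this, the dilution chain works out roughly as follows (a back-of-envelope Python sketch; the 3 ul + 3 ul volumes are the ones described above, and the PhiX spike arithmetic is only approximate):

          lib_nM = 2.73
          denatured_pM = lib_nM * 1000 * 3.0 / 6.0   # 3 ul library + 3 ul 0.2 N NaOH -> ~1365 pM
          loading_pM   = denatured_pM * 6.0 / 600.0  # all 6 ul made up to 600 ul with HT1 -> 13.65 pM

          def mix_pM(*parts):
              """Concentration (pM) of a mix, given (conc_pM, volume_ul) pairs."""
              return sum(c * v for c, v in parts) / sum(v for _, v in parts)

          # + 10 ul of 20 pM PhiX: ~13.8 pM by a weighted average, i.e. "call it 14 pM"
          print(mix_pM((loading_pM, 600.0), (20.0, 10.0)))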



          • #20
            I was referring to this statement, AKrohn:
            Sometimes I intentionally dilute 10 nM libraries two or three times to use more reliable volumes than, say 0.75 or 1 ul, and use accordingly 2-3 times more 0.2 M NaOH.
            I wasn't sure if that meant that he had tried denaturing a larger volume of a more dilute library solution, with a corresponding increase in volume of 0.2N NaOH, then neutralizing with 1 ul of 1M HOAc.

            Since the net effect of the HT1 dilution is neutralization even in the absence of HOAc, then in the presence of HOAc maybe that volume is not as important (i.e. just make it up to 600 ul).
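
            Back-of-envelope, the mole balance looks like this (Python; the 3 ul of 0.2 N NaOH is from AKrohn's description of the recipe above, the 1 ul of 1 M HOAc from the step mentioned here - scale both if denaturing a larger volume):

            def umol(conc_M, vol_ul):
                """Micromoles in vol_ul of a conc_M solution (1 mol/L == 1 umol/uL)."""
                return conc_M * vol_ul

            naoh = umol(0.2, 3)    # 0.6 umol NaOH used in the denature
            hoac = umol(1.0, 1)    # 1.0 umol acetic acid added to neutralize
            print(hoac - naoh)     # ~0.4 umol excess HOAc, presumably absorbed by the buffered HT1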

            It's an expensive experiment to do, so I thought I'd get clarification rather than just try it myself.

            Neither here nor there, really...



            • #21
              Dear Akrohn, I just wondered what the final output/quality was like?

              Originally posted by AKrohn View Post
              austinso, Yaximik's recipe does dilute 100-fold with hyb buffer. Since he(?) uses 3 ul NaOH and 3 ul of pool, the final dilution achieves this.

              Thanks rnaeye for the clarification. I almost tried this instead yesterday to get my concentration up, but figured I should let the denaturation go a bit longer if I used a lower NaOH concentration. I found this paper (http://onlinelibrary.wiley.com/doi/1....201200934/pdf) which seemed to indicate that NaOH as low as 0.5 mM might not completely denature the sample.

              I wanted to report my run metrics since I was attempting Yaximik's neutralization mod. The raw library was 2.73 nM, so following this protocol my final (loading) pool concentration was 13.65 pM + 10 uL of 20 pM PhiX = 13.98 pM (call it 14 pM loaded). Density is 1546 +/- 46 k/mm², 81.5% PF (28M reads), 17G projected yield, 93.2% >Q30 after 220 cycles. In a word, fantastic!!



              • #22
                Dear Mcnelson, I realise this is quite an old thread, but I am intrigued by your comment below:
                Originally posted by mcnelson.phd View Post
                Last words I really have are that we have also given up entirely on using the XT library normalization and denaturing process and instead take the post-PCR cleaned libraries and treat them like standard Nextera libraries. Since we've started doing that we haven't had the horribly uneven pooling or over clustering issues we had when using the official normalization and denaturing protocol.
                We are about to embark on MiSeq runs using the Nextera XT kit. A big reason for choosing this kit is the advertised ability to normalise libraries for multiplexing (we'll initially be plexing 16 libraries on a MiSeq 2x300 run) without needing to use a bioanalyser/PicoGreen etc. on each library. What problems did you have with Nextera XT exactly?

                Our starting DNA samples will be yeast genomic DNA (12.5Mb genome size).



                • #23
                  M4TTN: Results looked splendid. Output as anticipated (17G), ~80%>q30.

                  I hadn't seen mcnelson's post despite having posted in this thread myself previously, but we also have abandoned the bead normalization step, for what sounds like similar reasons. Using the Illumina protocol we might hit about 750 k/mm² clustering, but we also might only hit about 200 k/mm², which is very low output. Further, the Illumina protocol doesn't address the shorter products that result from non-size-selected, enzymatically sheared DNA. If you do a 0.5X bead prep you can get rid of everything smaller than about 200 bp, but this also seems to lead to lower cluster density. These small products also present a problem for longer read lengths, since you end up sequencing a lot of adapter with a 2x300 kit. However, you probably have plenty of library to achieve adequate clustering. We have also found that increasing the PCR step to 15 cycles (from 12) makes all the difference in achieving adequate quantity. Others may advise against this due to the obvious bias concern, but I think this is less important than getting enough data for your study.

                  Our protocol:
                  1) After 15 cycles of PCR, do 0.5X bead cleanup
                  2) Run libraries on bioanalyzer to get an idea of size distribution. Use high sensitivity chip.
                  3) qPCR quantify libraries using bioanalyzer size (approximate, usually around 500bp, even if the main peak is closer to 1000 due to all the small fragments)
                  4) Pool evenly to a final sum of 4nM prior to denaturing.
                  5) Load nextera libraries at about 14pM.
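
                  Putting steps 3 and 4 together, one way to do the pooling is just C1V1 = C2V2 per library. A rough sketch (Python; the sample names and concentrations are made up, and 5 ul per library is only an example aliquot):

                  def pooling_volumes(lib_nM, pool_nM=4.0, per_lib_ul=5.0):
                      """Volumes (library_ul, diluent_ul) that bring each library to pool_nM
                      so equal per_lib_ul aliquots can be combined into the final pool."""
                      vols = {}
                      for name, conc in lib_nM.items():
                          lib_ul = per_lib_ul * pool_nM / conc   # C1*V1 = C2*V2
                          vols[name] = (round(lib_ul, 2), round(per_lib_ul - lib_ul, 2))
                      return vols

                  # Hypothetical qPCR concentrations (nM) for three libraries
                  print(pooling_volumes({"lib1": 12.4, "lib2": 8.1, "lib3": 22.0}))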

                  Make sure you specify in your sample sheet to remove adapters, though I bet you will still see k-mer enrichment in FastQC. These are likely the short bits of adapter that weren't read far enough into to be removed. Use fastq-mcf (ea-utils) with about 16 bases of the 3' end of the adapter and 80% identity to remove most of these stragglers from your raw fastq files. If operating in BaseSpace, I have no idea how to address this shortcoming.

                  Relax and try not to fret about cluster density!!



                  • #24
                    Thanks for the detailed response AKrohn.

                    I find it very odd that the amount of DNA coming out of the Nextera sample normalisation is so poor. It hardly makes the kit fit for purpose. Have you complained? To make our experiments cost effective, we really need to hit >50% of the maximum output from each flowcell run.

                    The sample normalisation was one of the reasons for choosing the XT kit. We are not DNA sample limited at all, so if we end up having to do traditional normalisation etc, we may be better off shearing samples individually in a BioRuptor or Covaris (we have both, but the latter will be quite tedious for multiple samples).

                    Questions (if you don't mind):

                    1: We really need to get as close to 600 bp read length as possible (non-overlapping PE reads would even be useful). If we don't get the read lengths we need, we were considering experimenting with using more DNA than 1ng (perhaps just to 2 ng) - to increase the average transposon-induced fragment length. Have you ever tried that?

                    2: With the AMPure beads, Illumina recommend adding just 25 ul of beads to 50 ul of sample (post-PCR) for a 2x250 bp run. For 2x300, we were considering adding even less - just 20 ul. Any experience here?

                    3: How many samples are you successfully multiplexing?

                    4: Were you using the equipment specced by Illumina for the bead normalisation? (an expensive VWR microplate shaker at 1800rpm). We aim to purchase something much cheaper called a Bioshake iQ, but which still can shake at speeds of up to 3000rpm.

                    5: I note that if you follow the Illumina denaturation guidelines, the final NaOH concentration is 2 mM:

                    Illumina recommend denaturing with 30 ul of 0.1 N NaOH, then diluting with an equal volume of LNS1, giving 0.05 N.
                    Then pool samples (still at 0.05 N).
                    Take 24 ul of the pooled sample and dilute it to a final volume of 600 ul in HT1.
                    This final dilution is only 25x, making the final NaOH concentration 2 mM.
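
                    Running the same numbers as a quick sanity check (Python, values exactly as listed above):

                    naoh_N     = 0.1                        # each library denatured in 30 ul 0.1 N NaOH
                    after_lns1 = naoh_N * 30.0 / 60.0       # + 30 ul LNS1 -> 0.05 N (50 mM)
                    after_ht1  = after_lns1 * 24.0 / 600.0  # 24 ul of pool into 600 ul HT1 (25x dilution)
                    print(after_ht1 * 1000.0)               # 2.0 mM - double the ~1 mM discussed earlier in this thread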

                    I wonder if that is the only thing that is going wrong with the clustering? i.e. a failure to neutralise the NaOH after pooling. After all, that is where this thread started out. Did you try using Acetic acid to neutralise the "bead-normalised" XT samples?

                    6. Finally, it seems somewhat odd (to me) that in the protocol the denatured libraries are first mixed with LNS1 before storing and/or pooling. In normal sample preps the denaturation is done just before loading, isn't it? Couldn't the libraries renature during storage? Or does LNS1 not really neutralise the NaOH at all?


                    Thanks for the idea of alternatively quantifying libraries with BA and qPCR.


                    Originally posted by AKrohn View Post
                    M4TTN: Results looked splendid. Output as anticipated (17G), ~80%>q30.

                    I hadn't seen mcnelson's post despite having posted in this thread myself previously, but we also have abandoned the bead normalization step, for what sounds like similar reasons. Using the Illumina protocol we might hit about 750 k/mm² clustering, but we also might only hit about 200 k/mm², which is very low output. Further, the Illumina protocol doesn't address the shorter products that result from non-size-selected, enzymatically sheared DNA. If you do a 0.5X bead prep you can get rid of everything smaller than about 200 bp, but this also seems to lead to lower cluster density. These small products also present a problem for longer read lengths, since you end up sequencing a lot of adapter with a 2x300 kit. However, you probably have plenty of library to achieve adequate clustering. We have also found that increasing the PCR step to 15 cycles (from 12) makes all the difference in achieving adequate quantity. Others may advise against this due to the obvious bias concern, but I think this is less important than getting enough data for your study.

                    Our protocol:
                    1) After 15 cycles of PCR, do 0.5X bead cleanup
                    2) Run libraries on bioanalyzer to get an idea of size distribution. Use high sensitivity chip.
                    3) qPCR quantify libraries using bioanalyzer size (approximate, usually around 500bp, even if the main peak is closer to 1000 due to all the small fragments)
                    4) Pool evenly to a final sum of 4nM prior to denaturing.
                    5) Load nextera libraries at about 14pM.

                    Make sure you specify in your sample sheet to remove adapters, though I bet you will still see k-mer enrichment in FastQC. These are likely the short bits of adapter that weren't read far enough into to be removed. Use fastq-mcf (ea-utils) with about 16 bases of the 3' end of the adapter and 80% identity to remove most of these stragglers from your raw fastq files. If operating in BaseSpace, I have no idea how to address this shortcoming.

                    Relax and try not to fret about cluster density!!



                    • #25
                      1) I have. It works. Also dependent on genome size.

                      2) Illumina's recommendation is exactly a 0.5X cleanup. You do incur loss of target DNA at this ratio, and reducing to 0.4X you could expect to lose even more. I'm fairly certain it is this step that leads to poor success in bead normalization. We did a TruSeq Custom Amplicon run last fall, which has the same bead normalization, and it was very even and clustered perfectly - but that was with a 1:1 cleanup. Every time I have done Nextera for genomes and tried to improve read length through stringent bead cleanups, bead normalization has resulted in poor clustering.

                      3) We have done as many as 24 samples at a time with Nextera. We are considering designing our own Nextera primers from the customer service letter so we can afford to do more, since the index primer kit from Illumina is obnoxiously overpriced.

                      4) For shaking, I have a plate adapter for a Fisher vortexer. It is a plastic disk with a foam top that you can cram a PCR plate into. If using half-skirted or non-skirted plates, 96-well racks will work to seat your plate. Set the vortexer to 2-3. No special equipment needed. Keep in mind that low-quality samples will not prep as well as higher-quality samples, but since you have plenty of DNA, this is not likely an issue for you.

                      5) I hadn't done the math on NaOH for Nextera, but this does seem like a problem for me. After our last crappy run I called Illumina. They comped me a sequencing kit, but I had effectively burned an entire Nextera kit since without doing your own normalization you skip some of the rather crucial (IMO) QC steps that tell you whether to bother pooling a sample or not. When it works it is splendid, but I think the time and money wasted on poor runs evens everything out. I only did the HOAc trick once, and those were Kozarewa-prepped samples.

                      6) My understanding is that you are creating ssDNA from your libraries prior to normalization. My suspicion is that the normalization works by attaching flowcell sequences to a finite number of beads. Thus, you need a certain minimum quantity of DNA per sample to occupy all the spaces on the beads to achieve ideal clustering; lower concentrations will still pull down part of your libraries. I also suspect that with an increasing number of samples this step matters less, since you are getting a proportional amount of DNA from each sample. Still, without actually quantifying libraries before loading, you have no idea how things "should" turn out before the run, which makes the process much more stressful. You will sleep better with qPCR info. If you use KAPA Fast polymerase with EvaGreen (Biotium -- or else pony up for the much more expensive KAPA SYBR FAST qPCR mix) and P5/P7 as primers, you can have your library quants done in under an hour. Not much of a burden if you ask me.



                      • #26
                        Thanks for your replies Akrohn. Things are becoming a lot clearer.

                        I can see that if the library fragments are quite variable in size, then yes, the recovery from a 0.5X AMPure cut could give a very low yield. In which case it certainly makes sense to increase PCR cycles to generate more sample prior to size selection.

                        Regarding the bead normalisation: I guess we would need to know what the components of LNA1 and LNB1 are to determine definitively how it works, but your hypothesis seems reasonable. But if so (and the libraries are already ssDNA at this point), why the need for elution/denaturation using 0.1 N NaOH? Unless that is the only way to get the hybridised libraries to elute from the beads?

                        As you say, direct quantification of the PCR-amplified size-selected libraries seems like a simple alternative for relatively few libraries.

                        We were also considering synthesising the barcode primers, but we'll buy the kits to begin with I think - just for simplicity.

                        I reported the 2mM NaOH discrepancy to Illumina tech support today and they are investigating.



                        • #27
                          First, I think I said something slightly incorrectly. I didn't mean to say genome-size dependent, I meant "intact" DNA size dependent. Decent quality DNA extractions yield DNA sheared to around 25-30kb. If you are really careful, you can keep the shearing to a minimum and wind up with larger DNA (~50kb). If you need larger than that then you need to immobilize cells (eg in agarose) before extraction, but that is neither here nor there unless doing something like PFGE.

                          Be careful about increasing your amplification too much. There are two immediate concerns I can think of. One is PCR duplicates during genome assembly, since these will artificially inflate the base call confidences and read depth. The other is that over-amplified libraries lead to a bioanalyzer anomaly. The bioanalyzer stain is dsDNA specific. Doing lots of amplification of randomly sheared DNA leads to heterodimerism among the different sequences, and they anneal to each other mainly at the ends, where the sequencing adapters are perfectly complementary. The rest of the DNA forms bubbles and likely multimeric formations that allow the dye to bind, but not as efficiently as to nice clean dsDNA. These bubbles also slow these heterodimers down as they migrate through any gel matrix. A subtly overamplified library will have a small shoulder peak that is a bit larger than the main peak; it is the main peak that accurately represents the average insert size. A very overamplified library will have a more convoluted shape, and the true dsDNA peak may be difficult to locate. In this case, the average insert size is nearly impossible to determine for sure.



                          • #28
                            Forgot to add why I replied in the first place. We order all of our own indexed primers and adapters from our synthesis source. Way cheaper than getting them as a kit. Simple desalted oligos are fine in my experience. If ordering a lot, get to know your rep, and order them in plates and pre-normalized. Your rep should cut you a decent discount if you are nonprofit, academic, or government.



                            • #29
                              Originally posted by AKrohn View Post
                              First, I think I said something slightly incorrectly. I didn't mean to say genome-size dependent, I meant "intact" DNA size dependent. Decent quality DNA extractions yield DNA sheared to around 25-30kb. If you are really careful, you can keep the shearing to a minimum and wind up with larger DNA (~50kb). If you need larger than that then you need to immobilize cells (eg in agarose) before extraction, but that is neither here nor there unless doing something like PFGE.
                              The only reason I would have thought the genome size is important is because of potential coverage concerns: 1 ng of human DNA is only a few hundred genome copies (on average). As to average fragment size, I would have thought that so long as it substantially exceeds the targeted/desired fragment length (let's say 2-3x longer on average), then subsequent insert sizes shouldn't be affected. After all, the only amplifiable molecules are those that receive two transposon integration events. End fragments don't amplify or cluster. At least that is how I understand it.

                              Originally posted by AKrohn View Post
                              Be careful about increasing your amplification too much. There are two immediate concerns I can think of. One is PCR duplicates during genome assembly, since these will artificially inflate the base call confidences and read depth. The other is that over-amplified libraries lead to a bioanalyzer anomaly. The bioanalyzer stain is dsDNA specific. Doing lots of amplification of randomly sheared DNA leads to heterodimerism among the different sequences, and they anneal to each other mainly at the ends, where the sequencing adapters are perfectly complementary. The rest of the DNA forms bubbles and likely multimeric formations that allow the dye to bind, but not as efficiently as to nice clean dsDNA. These bubbles also slow these heterodimers down as they migrate through any gel matrix. A subtly overamplified library will have a small shoulder peak that is a bit larger than the main peak; it is the main peak that accurately represents the average insert size. A very overamplified library will have a more convoluted shape, and the true dsDNA peak may be difficult to locate. In this case, the average insert size is nearly impossible to determine for sure.
                              I didn't know about this. Is this what is called "bird nesting"?

                              If the annealing is mostly at the ends, I would have thought it only occurs once the available amplification primer pool has been sufficiently depleted. It is essentially a competition between primer binding and annealing between the fragments. I guess this is one area where the kit is not so good, since it is impossible to tweak the primer concentration. Illumina may supply just enough primers to make a limited number of amplicons within 12 cycles (making additional cycles problematic unless the starting material was lower).

                              With your custom oligos, have you tried adjusting the primer concentration? In fact, (if you don't mind sharing) what primer concentration do you use?

                              Another alternative would be to do the AMPure cleanup before (and after) PCR, thus both reducing the starting library complexity and increasing the number of large-insert molecules.

                              Good point about over-representation of identical reads due to PCR. I guess that perfectly duplicate reads (in position/length) can be removed bioinformatically before alignment. But it is not something that I had previously thought about. Thanks!

                              I'll have to think about library complexity...

                              1 ng of yeast genomic DNA (12.5 Mbp genome) = ~73,800 genome copies; Nextera cuts each copy into ~25,000 pieces (average 500 bp), giving ~1.8x10^9 fragments in total.

                              Let's say 10% are recovered by a stringent AMPure size selection = 182 million unique fragments in the starting pool in each library prior to PCR.

                              That's at least 100x greater than the MiSeq cluster number per library (assuming we multiplex at least 10 samples per run). Which sounds good...I think...We'll probably multiplex 24.
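
                              For the record, the same back-of-envelope in Python (same assumptions as above: ~650 g/mol per base pair, ~500 bp average fragment, ~10% surviving size selection):

                              AVOGADRO     = 6.022e23
                              BP_G_PER_MOL = 650.0                     # average mass of one base pair (g/mol)

                              genome_bp = 12.5e6                       # yeast genome, ~12.5 Mbp
                              input_g   = 1e-9                         # 1 ng of input DNA
                              genome_g  = genome_bp * BP_G_PER_MOL / AVOGADRO
                              copies    = input_g / genome_g           # ~74,000 genome copies
                              frags     = copies * genome_bp / 500.0   # ~1.8e9 fragments at ~500 bp each
                              survivors = 0.1 * frags                  # ~1.8e8 unique molecules per library pre-PCR
                              print(int(copies), f"{frags:.2e}", f"{survivors:.2e}")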



                              • #30
                                You are right about the genome-size concern in terms of transposase efficiency, but in terms of fragment size, the main factor will be how intact your DNA truly is.

                                I've never heard the term bird nesting, but I like it. I have heard "Christmas tree effect" and "jack straw."

                                For Nextera I am still working through a kit of oligos that a user bought a while back. I would think anywhere between 0.2 and 1uM would be appropriate.

                                I wouldn't just do a bead prep on a DNA sample. Instead, run it on a gel first. It should be larger than a very large ladder fragment; my largest ladder fragment is 8 kb, and HindIII-cut lambda has a fragment around 23 kb. If you see a smear, then by all means do a bead cleanup; as long as your DNA prep was clean, you shouldn't need to. DNA checks on gels aren't used as much as they should be and are very useful. Make sure you load enough sample - commit 10 ul from a 100 ul eluate if you can spare it.

                                Save money on your beads. Then you can use them all the time. http://enggen-nau.blogspot.com/2013/...-cleanups.html (Rohland & Reich, 2012).
