  • To supplement the above post, a logical look at the Nextera and Nextera XT workflows indicates that the XT index 1 primers (S5XX) have a biotin or other moiety at the 5' end. It serves both to block the 5' end and to normalise the DNA mass, by binding to a limited number of streptavidin-coated (or other, depending on the moiety) beads. The concentrations of S5XX, N5XX and N7XX are similar when run on a small RNA chip. Nextera XT PCR is done with the S5 and N7 primers only, but to compensate for the 50x higher DNA input in Nextera they add PPC oligos, which are complements of the flow-cell-binding motifs on the adapters. This way the full adapter sequences are restored on the tagmented fragments by the N5 (S5) and N7 primers, and amplification is mostly done by the PPC primers.
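    To make the bead-based normalisation mechanism concrete, here is a minimal sketch of the arithmetic, assuming a fixed per-sample bead binding capacity (the capacity figure is a hypothetical placeholder, not a value from any kit documentation):

    ```python
    # Bead-based normalisation sketch: a limited pool of streptavidin-coated
    # beads binds a fixed mass of biotinylated library, clamping every
    # over-loaded sample to the same output mass. Capacity is hypothetical.

    BEAD_CAPACITY_NG = 15.0  # assumed binding capacity per sample (illustrative)

    def normalised_mass(input_ng: float, capacity_ng: float = BEAD_CAPACITY_NG) -> float:
        """Mass recovered after bead binding; excess library stays unbound."""
        return min(input_ng, capacity_ng)

    for library_ng in (5.0, 20.0, 80.0):
        print(f"input {library_ng:5.1f} ng -> recovered {normalised_mass(library_ng):4.1f} ng")
    ```

    Samples below the bead capacity pass through unchanged, which is why under-amplified libraries remain the outliers after normalisation.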



    • Extremely helpful information, Simone78 and nucacidhunter! Thank you very much!



      • Originally posted by Simone78 View Post
        Hi,
        yes, I had similar issues when working with immune cells. I noticed that decreasing the TSO leads to lower yield, so I wouldn't do that; reducing the ISPCR primer and oligo-dT (for the Smart-seq2 protocol) helps a bit, but it's not the solution. Instead, use biotinylated primers (a biotin group at the 5'-end), which should prevent the formation of concatemers and primer dimers. An alternative approach would be to add 3 iso-nucleotides at the 5' end of your TSO, as described in PMID:20598146.
        Good luck!
        /Simone
        Thank you so much! I ended up ordering ones with biotin, since that seems the most straightforward option (without changing the length of the primer, etc.)

        In your experience, have you also seen whether the elongation time during RT matters? Some people in my lab have extended the time at 42°C to 3 hours... which seems like it could potentially result in more concatemers or weird priming.



        • Originally posted by SunPenguin View Post
          Thank you so much! I ended up ordering ones with biotin, since that seems the most straightforward option (without changing the length of the primer, etc.)

          In your experience, have you also seen whether the elongation time during RT matters? Some people in my lab have extended the time at 42°C to 3 hours... which seems like it could potentially result in more concatemers or weird priming.
          Actually not. I tried the opposite: reducing the RT time to 15 min, the time now recommended for the new SuperScript IV (and because I'm tired of waiting 1.5 hours!). Result: the yield was lower and the size slightly smaller, but not by much considering it is an ~80% reduction in time. In my case, however, concatemers were not visible either way. You could try it and see if it makes things better.
          /Simone



          • Superscript II concerns

            We would like to know whether anyone thinks our single-cell transcriptome results may be questionable due to the issue with SuperScript II that several people have raised. We've been using at least one of the SSII lots that have been questioned. I've attached cDNA traces (after KAPA amplification). We are seeing sequence mapping percentages of 60-70% (for highly activated human lymphocytes) and 40-50% (for resting memory human lymphocytes).

            Do people think there's likely to be a problem in our data? Clearly we're generating some product that doesn't depend on template coming from cells (see the "no cell" controls at right in the slide). Is there too much of that material? What do people think of mapping percentages of 40-50%?

            Thanks very much!
            Eli



            • Originally posted by Simone78 View Post
              The original Nextera kit from Epicentre reported the concentrations of the oligos. In that kit they were using the i5 + i7 pairs (0.5 uM final) plus two others, the "PPC" mix (PCR Primer Cocktail, which I think Illumina still includes in the Nextera kit for inputs up to 50 ng). The PPC had a concentration 20 times higher, that is, 10 uM.
              Briefly, I can tell you that:
              - I took the oligo sequences from the Illumina website, ordered them from another vendor and used them in the same 20:1 ratio. The Illumina oligos have a phosphorothioate bond between the last 2 nucleotides at the 3' end (to make them resistant to nucleases), but I think they also have some kind of blocking group at the 5' end. Mine were not blocked but they worked well anyway. However, when tagmenting picogram or sub-picogram inputs of DNA, the huge excess of unused primers led to a massive accumulation of dimers that could not be removed with the bead purification. I guess that was because they were not blocked. Result: many reads were just coming from the adaptors. A solution would be to titrate the amount of primers to your input DNA.
              - If you plan to use the Nextera XT kit (that is, start from <1 ng DNA for tagmentation) you can dilute your adaptors 1:5 (at least) and you won't see any difference. In this way the index kit becomes very affordable and you don't have to worry about dimers in your prep. If, in parallel, you also scale down the volume of your tagmentation reaction (20 ul for a Nextera XT kit is a huge waste!), the amount of index primers decreases even more. Even without liquid handling robots you can easily perform a tagmentation reaction in 2 ul (5 ul final volume after PCR). Your kit will last 20 times longer and your primers even 100 times longer! I am currently using this strategy with the 384-index kit from Illumina, where I buy the 4 sets of 96 primers each, dilute them and put them in a 384-well "index plate", ready to use on our liquid handling robot.
              Hi Simone,

              You mentioned scaling down the tagmentation reaction to 2 ul, then having a 5 ul final volume after PCR: do you use your own Tn5 for this, and is 5 ul the PCR volume (I've never done a PCR in such a low volume before)?

              Thanks in advance.
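              As a side note, the savings from the scale-down described above can be sketched with quick arithmetic. The standard volumes below are assumptions based on the usual Nextera XT protocol (20 ul tagmentation, 5 ul of each index primer); the miniaturised volumes come from the post. The post's figures of 20x and even 100x presumably also account for scaling the PCR step, which this sketch ignores:

              ```python
              # Back-of-the-envelope reagent stretch from miniaturising Nextera XT.
              # Standard volumes are assumptions based on the usual protocol; the
              # miniaturised volumes come from the post above.

              standard_tagmentation_ul = 20.0  # usual Nextera XT tagmentation volume
              mini_tagmentation_ul = 2.0       # miniaturised volume from the post

              standard_index_ul = 10.0         # assumed 5 ul i5 + 5 ul i7 per standard rxn
              mini_index_ul = 1.0              # 1 ul of a 1:5 dilution per mini rxn
              index_dilution_factor = 5.0

              kit_stretch = standard_tagmentation_ul / mini_tagmentation_ul
              # Undiluted-primer equivalent consumed per miniaturised reaction:
              primer_equiv_ul = mini_index_ul / index_dilution_factor
              primer_stretch = standard_index_ul / primer_equiv_ul

              print(f"tagmentation reagents last ~{kit_stretch:.0f}x longer")  # ~10x
              print(f"index primers last ~{primer_stretch:.0f}x longer")       # ~50x
              ```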



              • Originally posted by daniel007 View Post
                Hi Simone,

                You mentioned scaling down the tagmentation reaction to 2 ul, then having a 5 ul final volume after PCR: do you use your own Tn5 for this, and is 5 ul the PCR volume (I've never done a PCR in such a low volume before)?

                Thanks in advance.
                Hi,
                since I am now working in a Single Cell Core Facility, and thus invoicing customers for our services, I can't use our home-made Tn5 (there is a patent on the application, of course). In my previous post I was talking about the Nextera XT kit. I had to find a way to cut costs; no customer wants to pay 10,000 USD for a 384-well plate! So I started reducing the reaction volumes to see how it looks.
                I have sequenced 7 lanes so far and everything looks good (cluster density, reads passing filter, etc.); I'm just waiting for some data on library complexity before saying that this reduction gives equally good results as the reaction in standard volumes.
                What I do is the following:
                - tagmentation using 0.5 ul cDNA from preamplification + 0.5 ul ATM (Tn5) + 1 ul TD (buffer); total = 2 ul
                - add 0.5 ul NT
                - add 1 ul of a 1:5 dilution of i5+i7 index primers + 1.5 ul NPM (master mix).
                Input DNA is 100-250 pg, but I wouldn't go above 400-500 pg or your libraries will get too long (1 kb), as I experienced in my very first trial.
                /Simone
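                For anyone scaling this up, the per-well recipe translates into per-plate reagent consumption as follows. This is a sketch based on the volumes in the post; the 10% overage is an assumed pipetting margin, and the index primers of course stay well-specific (unique i5/i7 pairs) rather than going into a master mix:

                ```python
                # Per-plate reagent consumption for the 2-ul tagmentation recipe above.
                # Per-well volumes are from the post; the 10% overage is an assumed
                # pipetting margin, not part of the original protocol.

                PER_WELL_UL = {
                    "cDNA from preamplification": 0.5,
                    "ATM (Tn5)": 0.5,
                    "TD buffer": 1.0,
                    "NT": 0.5,
                    "index primers (1:5 dilution)": 1.0,
                    "NPM (PCR master mix)": 1.5,
                }

                def plate_totals(wells: int = 384, overage: float = 1.10) -> dict:
                    """Total volume (ul) of each component for one plate, with overage."""
                    return {name: vol * wells * overage for name, vol in PER_WELL_UL.items()}

                for name, total in plate_totals().items():
                    print(f"{name:30s} {total:7.1f} ul")
                ```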



                • Originally posted by eab View Post
                  We would like to know whether anyone thinks our single-cell transcriptome results may be questionable due to the issue with SuperScript II that several people have raised. We've been using at least one of the SSII lots that have been questioned. I've attached cDNA traces (after KAPA amplification). We are seeing sequence mapping percentages of 60-70% (for highly activated human lymphocytes) and 40-50% (for resting memory human lymphocytes).

                  Do people think there's likely to be a problem in our data? Clearly we're generating some product that doesn't depend on template coming from cells (see the "no cell" controls at right in the slide). Is there too much of that material? What do people think of mapping percentages of 40-50%?

                  Thanks very much!
                  Eli
                  Hi Eli,

                  We have found that the effects we are seeing are quite variable in their severity. With degraded material, we are seeing severe contamination, to the point of 75-85% of reads mapping to our bacterial reference. We are getting a significant amount of recovery from NTCs, which maps exclusively to our bacterial reference. However, with intact material we are only seeing ~2-5% bacterial mapping, so the impact is minimal. You can always try aligning to a bacterial reference and see what fraction of reads map to it. We aren't doing single-cell work, so I can't be sure exactly how it is impacting your results, but it has been very difficult for us.
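                  That suggestion of aligning to a bacterial reference is easy to script. A minimal sketch, assuming bwa and samtools are on the PATH and the reference FASTA has already been indexed with `bwa index`; the file names are hypothetical placeholders:

                  ```python
                  # Estimate what fraction of reads map to a suspected contaminant.
                  # Assumes bwa and samtools are installed; paths are hypothetical.
                  import subprocess

                  REF = "bacterial_contaminant.fa"  # hypothetical, pre-indexed with `bwa index`
                  FASTQ = "sample_R1.fastq.gz"      # hypothetical single-end reads

                  def contaminant_fraction(ref: str, fastq: str) -> float:
                      """Align reads and return the mapped fraction from samtools flagstat."""
                      result = subprocess.run(
                          f"bwa mem {ref} {fastq} | samtools flagstat -",
                          shell=True, capture_output=True, text=True, check=True,
                      )
                      total = mapped = 0
                      for line in result.stdout.splitlines():
                          if "in total" in line:
                              total = int(line.split()[0])
                          elif "mapped (" in line and "primary" not in line:
                              mapped = int(line.split()[0])
                              break
                      return mapped / total if total else 0.0

                  print(f"{contaminant_fraction(REF, FASTQ):.1%} of reads map to the contaminant")
                  ```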



                  • Originally posted by Kneu View Post
                    In the recent past I successfully got single-cell RNA-seq data with the Smart-seq2 protocol, but I was using the SuperScript III RT enzyme. I was unaware of its decreased template-switching capacity, and since reading these posts I have been trying to change my RT enzyme for improved cDNA yield. With 5' biotinylated oligos and SuperScript II I did see improved amplification, but I had the same contamination reported earlier (lot #1685467). So I switched to the recommended PrimeScript from Clontech. Unfortunately, when my Bioanalyzer results came back there was no amplification. Has anyone had recent success with this enzyme? I am trying to figure out what I could have done wrong. Briefly: I performed the oligo-dT annealing reaction at 72°C for 3 min and transferred to ice. Then I set up the RT mix with PrimeScript, PrimeScript buffer, RNase inhibitor, DTT, betaine, MgCl2 and TSO. The RT reaction was 90 min at 42°C, 15 min at 70°C, then 4°C and back to ice. The only thing I changed in the preamp PCR was to increase the ISPCR oligo to 0.25 uM, since it now has the 5' biotin, and I performed 20, 21 and 22 preamp cycles. Even my 100-cell well did not show any amplification, and I have not had trouble with this cell population in the past. Does anyone have ideas about what could have gone wrong? Wishingfly, have you gotten results back with PrimeScript yet?
                    Thanks in advance!
                    Hi Kneu, since you mentioned me, I will reply here with my two cents on the choice of RTase. Sorry for the delay, I was away from the bench on vacation.

                    As to the RT efficiency (or, if you like, enzyme activity): in my hands, SuperScript II > ProtoScript II > PrimeScript, while Maxima and SuperScript IV do not work at all. Considering that Invitrogen has not fixed the potential contamination yet, we now mainly use ProtoScript II from NEB.

                    We did have some communication with Invitrogen, and they acknowledged that they have received similar complaints from other customers in the U.S. and are now investigating the issue. I would encourage everyone in the U.S. to contact the customer service in your region and report the issue, so as to push Invitrogen to fix the problem as soon as possible.

                    Invitrogen also mentioned that product from outside the U.S. should be fine, because it comes from a different facility. However, the current sales system doesn't allow ordering the "non-U.S." version, so we researchers in the U.S. will have to be patient until they fix the problem.



                    • Originally posted by Simone78 View Post
                      Hi,
                      since I am now working in a Single Cell Core Facility, and thus invoicing customers for our services, I can't use our home-made Tn5 (there is a patent on the application, of course). In my previous post I was talking about the Nextera XT kit. I had to find a way to cut costs; no customer wants to pay 10,000 USD for a 384-well plate! So I started reducing the reaction volumes to see how it looks.
                      I have sequenced 7 lanes so far and everything looks good (cluster density, reads passing filter, etc.); I'm just waiting for some data on library complexity before saying that this reduction gives equally good results as the reaction in standard volumes.
                      What I do is the following:
                      - tagmentation using 0.5 ul cDNA from preamplification + 0.5 ul ATM (Tn5) + 1 ul TD (buffer); total = 2 ul
                      - add 0.5 ul NT
                      - add 1 ul of a 1:5 dilution of i5+i7 index primers + 1.5 ul NPM (master mix).
                      Input DNA is 100-250 pg, but I wouldn't go above 400-500 pg or your libraries will get too long (1 kb), as I experienced in my very first trial.
                      /Simone
                      Hi Simone,
                      That is a very cool penny-saving idea for preparing sequencing libraries. I am curious: do you use the beads from Illumina for pool normalization? If so, do you also scale them down to 1/10? How much DNA (in total) do you finally load for sequencing? Thanks a lot!



                      • Originally posted by wishingfly View Post
                        Hi Simone,
                        That is a very cool penny-saving idea for preparing sequencing libraries. I am curious: do you use the beads from Illumina for pool normalization? If so, do you also scale them down to 1/10? How much DNA (in total) do you finally load for sequencing? Thanks a lot!
                        Hi,
                        I don't use the beads from Illumina. What I do is pool all the samples in a 2 ml tube after the enrichment PCR, mix really, really well, take an aliquot (let's say 100 ul) and do a bead purification with SeraMag SpeedBeads (or AMPure) on just that aliquot. This saves me A LOT of time and a lot of money.
                        I then Qubit the purified library and measure the size on the Bioanalyzer.
                        My libraries are a bit too long, usually between 600 and 1000 bp (the reason why they are too long is also too long to explain here, but I am now working on making them shorter). Last week we loaded 12-13 pM on our HiSeq2000 and got 270-290M reads/lane. Preliminary data analysis showed that the quality was good, but right now I don't know anything about the coverage across the entire length of the transcripts. We do SE 50 bp sequencing, and with such long fragments we might have a problem.
                        /Simone
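                        For reference, the conversion from a Qubit reading and a Bioanalyzer size to a loading molarity like the 12-13 pM above is a one-liner. The concentration and fragment size below are hypothetical example values, and the sketch ignores the denaturation and dilution steps of the actual Illumina loading protocol:

                        ```python
                        # Convert a Qubit mass concentration (ng/ul) and a Bioanalyzer mean
                        # fragment size (bp) into library molarity. 660 g/mol is the average
                        # mass of one base pair of dsDNA. Example values are hypothetical.

                        def library_nM(conc_ng_per_ul: float, mean_length_bp: float) -> float:
                            """Library molarity in nM from mass concentration and mean size."""
                            return conc_ng_per_ul * 1e6 / (660.0 * mean_length_bp)

                        conc_ng_per_ul, mean_bp = 2.0, 800  # hypothetical pool measurements
                        stock_nM = library_nM(conc_ng_per_ul, mean_bp)
                        target_pM = 12.5                    # midpoint of the 12-13 pM load
                        dilution_factor = stock_nM * 1000.0 / target_pM
                        print(f"stock = {stock_nM:.2f} nM; dilute ~1:{dilution_factor:.0f} "
                              f"to load {target_pM} pM")
                        ```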



                        • Originally posted by Simone78 View Post
                          Hi,
                          I don't use the beads from Illumina. What I do is pool all the samples in a 2 ml tube after the enrichment PCR, mix really, really well, take an aliquot (let's say 100 ul) and do a bead purification with SeraMag SpeedBeads (or AMPure) on just that aliquot. This saves me A LOT of time and a lot of money.
                          I then Qubit the purified library and measure the size on the Bioanalyzer.
                          My libraries are a bit too long, usually between 600 and 1000 bp (the reason why they are too long is also too long to explain here, but I am now working on making them shorter). Last week we loaded 12-13 pM on our HiSeq2000 and got 270-290M reads/lane. Preliminary data analysis showed that the quality was good, but right now I don't know anything about the coverage across the entire length of the transcripts. We do SE 50 bp sequencing, and with such long fragments we might have a problem.
                          /Simone
                          Thanks a lot for sharing the tips with us. However, I don't quite get it: do you assume that fragments from each individual library will be captured by the "SeraMag SpeedBeads" equally?

                          As to the savings, I can see your procedure is more convenient than the Illumina protocol, but how does it save money? I thought the Nextera XT kit includes the beads for normalization; or do you mean time is money?



                          • Originally posted by wishingfly View Post
                            Thanks a lot for sharing the tips with us. However, I don't quite get it: do you assume that fragments from each individual library will be captured by the "SeraMag SpeedBeads" equally?

                            As to the savings, I can see your procedure is more convenient than the Illumina protocol, but how does it save money? I thought the Nextera XT kit includes the beads for normalization; or do you mean time is money?
                            Sorry, I wrote it yesterday evening and I might have been very tired.
                            Unfortunately, all the samples from a plate are not equally represented in the final pool. My protocol is not perfect and I have to compromise a bit if I want higher throughput. Therefore I do the following:
                            - I run a HS chip after preamplification. I look at my samples and check that everything is OK (if fewer than half of the samples are good, I don't continue). I then calculate the AVERAGE concentration of those samples and use it as the input for tagmentation. Of course, sometimes the yield is very different between cells.
                            - I then take the SAME volume from each well (for a 2 ul reaction I wouldn't go above 200-300 pg, since I am using only 0.5 ul Tn5) and do the tagmentation. Again, I might have much more DNA for some cells, and those will be "under-tagmented", or I might have much less and get very little DNA out after the enrichment PCR.
                            - After the enrichment PCR (final volume = 5 ul) I pool everything in a tube, mix and take an aliquot, let's say 100 ul.
                            - I then use only 100 ul of SeraMag beads and purify only that aliquot. That's why the purification is faster and cheaper.

                            I save money in several ways:
                            - by reducing the volume of the tagmentation reaction (from 20 to 2 ul)
                            - by reducing the amount of SeraMag/AMPure beads (I don't purify the whole plate; it would be quite expensive if I had to purify 384 samples for every plate I process).

                            Problems with this approach:
                            - some samples will end up with too few reads and will be discarded.

                            Hope it's clear now!
                            /Simone
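                            The compromise in this averaging strategy can be sanity-checked per plate. Here is a sketch with hypothetical per-well concentrations, flagging wells whose actual input falls outside a target window loosely based on the 100-300 pg guidance in the post:

                            ```python
                            # Equal-volume tagmentation input check for the averaging strategy
                            # above. Per-well concentrations are hypothetical; the window is
                            # loosely based on the 100-300 pg guidance in the post.
                            well_conc_pg_per_ul = {"A1": 150, "A2": 480, "A3": 90, "A4": 610, "A5": 220}

                            SAMPLE_VOL_UL = 0.5          # cDNA volume taken from every well
                            LOW_PG, HIGH_PG = 100, 300   # target per-reaction input window

                            mean_conc = sum(well_conc_pg_per_ul.values()) / len(well_conc_pg_per_ul)
                            print(f"plate average: {mean_conc:.0f} pg/ul -> nominal input "
                                  f"{mean_conc * SAMPLE_VOL_UL:.0f} pg/rxn")

                            for well, conc in well_conc_pg_per_ul.items():
                                actual_pg = conc * SAMPLE_VOL_UL
                                flag = ""
                                if actual_pg < LOW_PG:
                                    flag = "  <- little DNA expected after enrichment PCR"
                                elif actual_pg > HIGH_PG:
                                    flag = "  <- likely under-tagmented (too much input)"
                                print(f"{well}: {actual_pg:5.0f} pg{flag}")
                            ```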



                            • Originally posted by Simone78 View Post
                              Sorry, I wrote it yesterday evening and I might have been very tired.
                              Unfortunately, all the samples from a plate are not equally represented in the final pool. My protocol is not perfect and I have to compromise a bit if I want higher throughput. Therefore I do the following:
                              - I run a HS chip after preamplification. I look at my samples and check that everything is OK (if fewer than half of the samples are good, I don't continue). I then calculate the AVERAGE concentration of those samples and use it as the input for tagmentation. Of course, sometimes the yield is very different between cells.
                              - I then take the SAME volume from each well (for a 2 ul reaction I wouldn't go above 200-300 pg, since I am using only 0.5 ul Tn5) and do the tagmentation. Again, I might have much more DNA for some cells, and those will be "under-tagmented", or I might have much less and get very little DNA out after the enrichment PCR.
                              - After the enrichment PCR (final volume = 5 ul) I pool everything in a tube, mix and take an aliquot, let's say 100 ul.
                              - I then use only 100 ul of SeraMag beads and purify only that aliquot. That's why the purification is faster and cheaper.

                              I save money in several ways:
                              - by reducing the volume of the tagmentation reaction (from 20 to 2 ul)
                              - by reducing the amount of SeraMag/AMPure beads (I don't purify the whole plate; it would be quite expensive if I had to purify 384 samples for every plate I process).

                              Problems with this approach:
                              - some samples will end up with too few reads and will be discarded.

                              Hope it's clear now!
                              /Simone
                              Dear Simone78,
                              Thank you for your invaluable help and information.
                              I wanted to ask you about picking up adherent single cells for RNA-seq using "FACS in a Petri".
                              Could you please kindly send me an email so that I can be in contact with you? I cannot find your contact information.
                              I am working at Karolinska Institutet.
                              I am looking forward to hearing from you.
                              Many thanks in advance!



                              • Originally posted by Simone78 View Post
                                Sorry, I wrote it yesterday evening and I might have been very tired.
                                Unfortunately, all the samples from a plate are not equally represented in the final pool. My protocol is not perfect and I have to compromise a bit if I want higher throughput. Therefore I do the following:
                                - I run a HS chip after preamplification. I look at my samples and check that everything is OK (if fewer than half of the samples are good, I don't continue). I then calculate the AVERAGE concentration of those samples and use it as the input for tagmentation. Of course, sometimes the yield is very different between cells.
                                - I then take the SAME volume from each well (for a 2 ul reaction I wouldn't go above 200-300 pg, since I am using only 0.5 ul Tn5) and do the tagmentation. Again, I might have much more DNA for some cells, and those will be "under-tagmented", or I might have much less and get very little DNA out after the enrichment PCR.
                                - After the enrichment PCR (final volume = 5 ul) I pool everything in a tube, mix and take an aliquot, let's say 100 ul.
                                - I then use only 100 ul of SeraMag beads and purify only that aliquot. That's why the purification is faster and cheaper.

                                I save money in several ways:
                                - by reducing the volume of the tagmentation reaction (from 20 to 2 ul)
                                - by reducing the amount of SeraMag/AMPure beads (I don't purify the whole plate; it would be quite expensive if I had to purify 384 samples for every plate I process).

                                Problems with this approach:
                                - some samples will end up with too few reads and will be discarded.

                                Hope it's clear now!
                                /Simone
                                Thank you so much for your detailed explanation! I am also considering a compromise procedure to save reagents, and your input is very valuable.
