SEQanswers > Applications Forums > Sample Prep / Library Generation



Old 08-19-2015, 12:32 PM   #141
daniel007
Junior Member
 
Location: London, UK

Join Date: Feb 2015
Posts: 5
Default

Quote:
Originally Posted by Simone78 View Post
The original Nextera kit from Epicentre reported the concentration of the oligos. In that kit they used the i5 + i7 pairs (0.5 uM final) plus two other primers, the "PPC" mix (PCR Primer Cocktail, which I think Illumina still includes in the Nextera kit for inputs up to 50 ng). The PPC had a 20-times-higher concentration, that is 10 uM.
Briefly, I can tell you that:
- I took the oligos from the Illumina website, ordered them from another vendor and used them in the same 20:1 ratio. The Illumina oligos have a phosphorothioate bond between the last 2 nucleotides at the 3' end (to make them resistant to nucleases), but I think they also have some kind of blocking group at the 5' end. Mine were not blocked but worked well anyway. However, when tagmenting picogram or sub-picogram inputs of DNA, the huge excess of unused primers led to a massive accumulation of dimers that could not be removed with the bead purification. I guess that was because they were not blocked. Result: many reads came just from the adaptors. A solution would be to titrate the amount of primers to your input DNA.
- If you plan to use the Nextera XT kit (that is, to start from <1 ng DNA for tagmentation), you can dilute your adaptors 1:5 (at least) and you won't see any difference. In this way the index kit becomes very affordable and you don't have to worry about dimers in your prep. If, in parallel, you also scale down the volume of your tagmentation reaction (20 ul for a Nextera XT kit is a huge waste!), the amount of index primers decreases even more. Even without liquid-handling robots you can easily perform a tagmentation reaction in 2 ul (5 ul final volume after PCR). Your kit will last 20 times longer and your primers even 100 times longer! I am currently using this strategy with the 384-index kit from Illumina: I buy the 4 sets of 96 primers each, dilute them and put them in a 384-well "index plate", ready to use on our liquid-handling robot.
Hi Simone,

You mentioned scaling down the tagmentation reaction to 2 ul, with a 5 ul final volume after PCR: do you use your own Tn5 for this, and is 5 ul the PCR volume? (I've never done a PCR at such a low volume before.)

Thanks in advance.
Old 08-19-2015, 10:21 PM   #142
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by daniel007 View Post
Hi Simone,

You mentioned scaling down the tagmentation reaction to 2 ul, with a 5 ul final volume after PCR: do you use your own Tn5 for this, and is 5 ul the PCR volume? (I've never done a PCR at such a low volume before.)

Thanks in advance.
Hi,
since I am now working in a Single Cell Core Facility and thus invoicing customers for our services, I can't use our home-made Tn5 (there is a patent for the application, of course). In my previous post I was talking about the Nextera XT kit. I had to find a way to cut costs; no customer wants to pay 10,000 USD for a 384-well plate! So I started reducing the reaction volumes to see how it looks.
I have sequenced 7 lanes so far and everything looks good (cluster density, reads passing filter, etc.). I'm just waiting for some data on library complexity before saying that this reduction gives results as good as the standard-volume reaction.
What I do is the following:
- tagmentation: 0.5 ul cDNA from preamplification + 0.5 ul ATM (Tn5) + 1 ul TD (buffer); total = 2 ul
- add 0.5 ul NT
- add 1 ul of a 1:5 dilution of i5+i7 index primers + 1.5 ul NPM (master mix).
Input DNA is 100-250 pg, but I wouldn't go above 400-500 pg or your libraries will get too long (1 kb), as I experienced in my very first trial.
/Simone
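The reagent stretch from this miniaturization can be sanity-checked with quick arithmetic. A minimal sketch, assuming reagent use scales linearly with reaction volume and counting only the tagmentation step (the 20 ul standard volume, 2 ul mini volume and 1:5 primer dilution are taken from the posts above):

```python
# Back-of-the-envelope check of the miniaturization savings described above.
# Assumes reagent use scales linearly with reaction volume (an approximation).

def fold_savings(standard_ul, mini_ul, primer_dilution=5):
    """Fold increase in reactions per kit (volume only) and in index-primer
    stretch (volume reduction combined with diluting the primers)."""
    volume_fold = standard_ul / mini_ul
    primer_fold = volume_fold * primer_dilution
    return volume_fold, primer_fold

# 20 ul standard tagmentation vs the 2 ul reaction above, primers diluted 1:5:
vol_fold, primer_fold = fold_savings(20.0, 2.0, primer_dilution=5)
```

On tagmentation volume alone the kit stretches 10x and the diluted primers 50x; the "20 times"/"100 times" figures quoted above presumably also count the PCR-side reagents.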
Old 08-23-2015, 08:03 PM   #143
Steven Abbott
Junior Member
 
Location: RTP

Join Date: Apr 2013
Posts: 3
Default

Quote:
Originally Posted by eab View Post
We would like to know whether anyone thinks our single-cell transcriptome results may be questionable due to the issue with SuperScript II that several people have raised. We've been using at least one of the SSII lots that have been questioned. I've attached cDNA traces (after Kapa amplification). We are seeing sequence mapping percentages of 60-70% (for highly activated human lymphocytes) and 40-50% (for resting memory human lymphocytes).

Do people think there's likely to be a problem in our data? Clearly we're generating a bit of product that doesn't depend on templates coming from cells (see the "no cell" controls at right in the slide). Is there too much of that stuff? What do people think of mapping percentages 40-50% in the data?

Thanks very much!
Eli
Hi Eli,

We have found that the effects we are seeing vary considerably in severity. With degraded material we see severe contamination, to the point of 75-85% of reads mapping to our bacterial reference. We also get a significant amount of recovery from NTCs, which maps exclusively to the bacterial reference. With intact material, however, we see only ~2-5% bacterial mapping, so the impact is minimal. You can always align to a bacterial reference and see what fraction of reads map to it. We aren't doing single-cell work, so I can't say exactly how it would affect your results, but it has been very difficult for us.
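The check suggested here reduces to one number once the alignment is done. A sketch assuming per-reference counts in the four-column `samtools idxstats` format; the "NC_" contig prefix and all counts below are illustrative, not from the posts:

```python
# Summarize what fraction of mapped reads hit bacterial contigs, given
# per-reference counts in `samtools idxstats` format:
#   ref_name <tab> ref_length <tab> mapped_reads <tab> unmapped_reads
# The "NC_" prefix for bacterial contigs is an illustrative assumption --
# use whatever naming your combined host+bacteria reference actually has.

def bacterial_fraction(idxstats_lines, bacterial_prefix="NC_"):
    mapped_total = 0
    mapped_bacterial = 0
    for line in idxstats_lines:
        name, _length, mapped, _unmapped = line.rstrip("\n").split("\t")
        if name == "*":  # idxstats' line for unplaced reads
            continue
        mapped = int(mapped)
        mapped_total += mapped
        if name.startswith(bacterial_prefix):
            mapped_bacterial += mapped
    return mapped_bacterial / mapped_total if mapped_total else 0.0

example = ["chr1\t248956422\t900\t10", "NC_000913\t4641652\t100\t5", "*\t0\t0\t50"]
# bacterial_fraction(example) -> 0.1
```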

Old 09-04-2015, 01:20 PM   #144
wishingfly
Member
 
Location: SF

Join Date: May 2015
Posts: 20
Default

Quote:
Originally Posted by Kneu View Post
In the recent past I successfully got single-cell RNA-seq data with the Smart-seq2 protocol, but I was using the SuperScript III RT enzyme. I was unaware of its decreased template-switching capacity, and since reading these posts I have been trying to change my RT enzyme for improved cDNA yield. With 5' bio oligos and SuperScript II I did see improved amplification, but I had the same contamination reported earlier (lot # 1685467). So I switched to the recommended PrimeScript from Clontech. Unfortunately, when my Bioanalyzer results came back, there was no amplification. Has anyone had recent success with this enzyme? I am trying to figure out what I could have done wrong. Briefly: I performed the oligo-dT annealing reaction at 72 C for 3 min, then transferred to ice. I then set up the RT mix with PrimeScript, PrimeScript buffer, RNase inhibitor, DTT, betaine, MgCl2 and TSO. The RT reaction was 90 min at 42 C, 15 min at 70 C and then 4 C, back to ice. The only thing I changed in the preamp PCR was to increase the ISPCR oligo to 0.25 uM, since it now has the 5' bio, and I performed 20, 21 and 22 preamp cycles. Even my 100-cell well did not show any amplification, and I have not had trouble with this cell population in the past. Does anyone have ideas about what could have gone wrong? Wishingfly, have you gotten results back with PrimeScript yet?
Thanks in advance!
Hi Kneu, since you mentioned me, I will reply here with my two cents on the choice of RTase. Sorry for the delay; I was away from the bench on vacation.

As to RT efficiency (or, if you like, enzyme activity): in my hands, SuperScript II > ProtoScript II > PrimeScript, while Maxima and SuperScript IV do not work at all. Considering that Invitrogen has not fixed the potential contamination yet, we now mainly use ProtoScript II from NEB.

We did have some communication with Invitrogen, and they acknowledged that they have received similar complaints from other customers in the U.S. and are now investigating the issue. I would encourage everyone in the U.S. to contact the customer service in your region and report the issue, so as to push Invitrogen to fix the problem as soon as possible.

Invitrogen also mentioned that product sold outside the U.S. should be fine, because it comes from a different facility. However, the current sales system doesn't allow ordering the "non-U.S." version, so we researchers in the U.S. have to be patient until they fix the problem.
Old 09-04-2015, 01:25 PM   #145
wishingfly
Member
 
Location: SF

Join Date: May 2015
Posts: 20
Default

Quote:
Originally Posted by Simone78 View Post
Hi,
since I am now working in a Single Cell Core Facility and thus invoicing customers for our services, I can't use our home-made Tn5 (there is a patent for the application, of course). In my previous post I was talking about the Nextera XT kit. I had to find a way to cut costs; no customer wants to pay 10,000 USD for a 384-well plate! So I started reducing the reaction volumes to see how it looks.
I have sequenced 7 lanes so far and everything looks good (cluster density, reads passing filter, etc.). I'm just waiting for some data on library complexity before saying that this reduction gives results as good as the standard-volume reaction.
What I do is the following:
- tagmentation: 0.5 ul cDNA from preamplification + 0.5 ul ATM (Tn5) + 1 ul TD (buffer); total = 2 ul
- add 0.5 ul NT
- add 1 ul of a 1:5 dilution of i5+i7 index primers + 1.5 ul NPM (master mix).
Input DNA is 100-250 pg, but I wouldn't go above 400-500 pg or your libraries will get too long (1 kb), as I experienced in my very first trial.
/Simone
Hi Simone,
That is a very cool penny-saving idea for preparing sequencing libraries. I am curious: do you use the beads from Illumina for pool normalization? If so, do you also scale down to 1/10? How much DNA (in total) do you finally load for sequencing? Thanks a lot!
Old 09-04-2015, 01:42 PM   #146
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by wishingfly View Post
Hi Simone,
That is a very cool penny-saving idea for preparing sequencing libraries. I am curious: do you use the beads from Illumina for pool normalization? If so, do you also scale down to 1/10? How much DNA (in total) do you finally load for sequencing? Thanks a lot!
Hi,
I don't use the beads from Illumina. What I do is pool all the samples in a 2 ml tube after the enrichment PCR, mix really, really well, take an aliquot (let's say 100 ul) and do a bead purification with SeraMag speed beads (or Ampure) on just that aliquot. This saves me A LOT of time and a lot of money.
I then Qubit the purified library and measure the size on the Bioanalyzer.
My libraries are a bit too long, usually between 600 and 1000 bp (the reason why they are too long is itself too long to explain here, but I am now working on making them shorter). Last week we loaded 12-13 pM on our HiSeq 2000 and got 270-290M reads/lane. Preliminary data analysis showed that the quality was good, but right now I don't know anything about the coverage across the entire length of the transcripts. We do SE 50 bp sequencing, and with such long fragments we might have a problem.
/Simone
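For readers wondering how a Qubit reading plus a Bioanalyzer mean length translate into a 12-13 pM loading target: a minimal sketch of the standard dsDNA conversion, using the usual ~660 g/mol-per-bp approximation (the example concentrations below are illustrative, not Simone's actual numbers):

```python
# Convert a Qubit mass concentration plus a Bioanalyzer mean fragment length
# into molarity, then into the fold dilution needed for a loading target.
# Uses the common ~660 g/mol per base pair approximation for dsDNA.

def library_nM(ng_per_ul, mean_bp, g_per_mol_per_bp=660.0):
    """Molar concentration (nM) of a dsDNA library."""
    # ng/ul equals mg/l; dividing by molar mass (g/mol) gives mol/l -> scale to nM
    return ng_per_ul * 1e6 / (mean_bp * g_per_mol_per_bp)

def fold_dilution(ng_per_ul, mean_bp, target_pM=12.5):
    """How many fold to dilute the pool to hit the target loading concentration."""
    return library_nM(ng_per_ul, mean_bp) * 1000.0 / target_pM

# e.g. a 2 ng/ul pool with an 800 bp mean length:
# library_nM(2.0, 800) is ~3.8 nM, so a ~300-fold dilution reaches ~12.5 pM
```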
Old 09-04-2015, 02:18 PM   #147
wishingfly
Member
 
Location: SF

Join Date: May 2015
Posts: 20
Default

Quote:
Originally Posted by Simone78 View Post
Hi,
I don't use the beads from Illumina. What I do is pool all the samples in a 2 ml tube after the enrichment PCR, mix really, really well, take an aliquot (let's say 100 ul) and do a bead purification with SeraMag speed beads (or Ampure) on just that aliquot. This saves me A LOT of time and a lot of money.
I then Qubit the purified library and measure the size on the Bioanalyzer.
My libraries are a bit too long, usually between 600 and 1000 bp (the reason why they are too long is itself too long to explain here, but I am now working on making them shorter). Last week we loaded 12-13 pM on our HiSeq 2000 and got 270-290M reads/lane. Preliminary data analysis showed that the quality was good, but right now I don't know anything about the coverage across the entire length of the transcripts. We do SE 50 bp sequencing, and with such long fragments we might have a problem.
/Simone
Thanks a lot for sharing the tips with us. However, I don't quite get it: do you assume that the same amount of fragments from each individual library ends up in the SeraMag bead purification?

As for the savings, I can see your procedure is more convenient than the Illumina protocol, but how does it save money? I thought the Nextera XT kit includes the beads for normalization; or do you mean time is money?
Old 09-04-2015, 11:27 PM   #148
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by wishingfly View Post
Thanks a lot for sharing the tips with us. However, I don't quite get it: do you assume that the same amount of fragments from each individual library ends up in the SeraMag bead purification?

As for the savings, I can see your procedure is more convenient than the Illumina protocol, but how does it save money? I thought the Nextera XT kit includes the beads for normalization; or do you mean time is money?
sorry, I wrote it yesterday evening; I might have been very tired.
Unfortunately, not all the samples from a plate are equally represented in the final pool. My protocol is not perfect and I have to compromise a bit if I want higher throughput. Therefore I do the following:
- I run a HS chip after preamplification. I look at my samples and check that everything is ok (if fewer than half of the samples are good, I don't continue). I then calculate the AVERAGE concentration of those samples and use it as input for tagmentation. Of course, sometimes the yield is very different between cells.
- I then take the SAME volume from each well (for a 2 ul reaction I wouldn't go above 200-300 pg, since I am using only 0.5 ul Tn5) and do the tagmentation. Again, I might have much more DNA for some cells, which will be "under-tagmented", or much less, and get very little DNA out after the enrichment PCR.
- After the enrichment PCR (final volume = 5 ul) I pool everything in a tube, mix and take an aliquot, let's say 100 ul.
- I then use only 100 ul SeraMag beads and purify only that aliquot. That's why the purification is faster and cheaper.

I save money in several ways:
- by reducing the volume of the tagmentation (from 20 to 2 ul)
- by reducing the amount of SeraMag/Ampure beads (I don't purify the whole plate; it would be quite expensive to purify 384 samples for every plate I process).

Problems with this approach:
- some samples will end up with too few reads and will be discarded.

Hope it's clear now!
/Simone
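The equal-volume compromise described above boils down to one small calculation per plate. A sketch under stated assumptions: the 250 pg target and the 0.5 ul volume cap are illustrative values chosen from the 200-300 pg and 0.5 ul figures in the post, not exact protocol parameters:

```python
# Pick ONE cDNA volume for every well of the plate, from the plate-average
# post-preamplification concentration, capped so it fits a 2 ul tagmentation.
# Target input (250 pg) and volume cap (0.5 ul) are illustrative assumptions.

def well_volume_ul(avg_conc_ng_per_ul, target_input_pg=250.0, max_ul=0.5):
    """Volume (ul) of cDNA to take from EVERY well."""
    vol = target_input_pg / (avg_conc_ng_per_ul * 1000.0)  # ng/ul -> pg/ul
    return min(vol, max_ul)

# A plate averaging 1 ng/ul -> take 0.25 ul per well (~250 pg average input);
# wells far from the average will be over- or under-tagmented, as noted above.
```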
Old 09-07-2015, 03:30 AM   #149
immpdaf
Junior Member
 
Location: Stockholm

Join Date: Sep 2015
Posts: 4
Default

Quote:
Originally Posted by Simone78 View Post
sorry, I wrote it yesterday evening; I might have been very tired.
Unfortunately, not all the samples from a plate are equally represented in the final pool. My protocol is not perfect and I have to compromise a bit if I want higher throughput. Therefore I do the following:
- I run a HS chip after preamplification. I look at my samples and check that everything is ok (if fewer than half of the samples are good, I don't continue). I then calculate the AVERAGE concentration of those samples and use it as input for tagmentation. Of course, sometimes the yield is very different between cells.
- I then take the SAME volume from each well (for a 2 ul reaction I wouldn't go above 200-300 pg, since I am using only 0.5 ul Tn5) and do the tagmentation. Again, I might have much more DNA for some cells, which will be "under-tagmented", or much less, and get very little DNA out after the enrichment PCR.
- After the enrichment PCR (final volume = 5 ul) I pool everything in a tube, mix and take an aliquot, let's say 100 ul.
- I then use only 100 ul SeraMag beads and purify only that aliquot. That's why the purification is faster and cheaper.

I save money in several ways:
- by reducing the volume of the tagmentation (from 20 to 2 ul)
- by reducing the amount of SeraMag/Ampure beads (I don't purify the whole plate; it would be quite expensive to purify 384 samples for every plate I process).

Problems with this approach:
- some samples will end up with too few reads and will be discarded.

Hope it's clear now!
/Simone
Dear Simone78,
Thank you for your invaluable help and information.
I would like to ask you about picking up adherent single cells for RNA-seq using "FACS in a Petri".
Could you please send me an email so that I can get in contact with you? I cannot find your contact information.
I am working at Karolinska Institutet.
I look forward to hearing from you.
Many thanks in advance
Old 09-08-2015, 12:52 PM   #150
wishingfly
Member
 
Location: SF

Join Date: May 2015
Posts: 20
Default

Quote:
Originally Posted by Simone78 View Post
sorry, I wrote it yesterday evening; I might have been very tired.
Unfortunately, not all the samples from a plate are equally represented in the final pool. My protocol is not perfect and I have to compromise a bit if I want higher throughput. Therefore I do the following:
- I run a HS chip after preamplification. I look at my samples and check that everything is ok (if fewer than half of the samples are good, I don't continue). I then calculate the AVERAGE concentration of those samples and use it as input for tagmentation. Of course, sometimes the yield is very different between cells.
- I then take the SAME volume from each well (for a 2 ul reaction I wouldn't go above 200-300 pg, since I am using only 0.5 ul Tn5) and do the tagmentation. Again, I might have much more DNA for some cells, which will be "under-tagmented", or much less, and get very little DNA out after the enrichment PCR.
- After the enrichment PCR (final volume = 5 ul) I pool everything in a tube, mix and take an aliquot, let's say 100 ul.
- I then use only 100 ul SeraMag beads and purify only that aliquot. That's why the purification is faster and cheaper.

I save money in several ways:
- by reducing the volume of the tagmentation (from 20 to 2 ul)
- by reducing the amount of SeraMag/Ampure beads (I don't purify the whole plate; it would be quite expensive to purify 384 samples for every plate I process).

Problems with this approach:
- some samples will end up with too few reads and will be discarded.

Hope it's clear now!
/Simone
Thank you so much for your detailed explanation! I am also considering a similar compromise to save reagents, and your input is very valuable.
Old 09-09-2015, 07:27 AM   #151
SunPenguin
Member
 
Location: Boston

Join Date: Aug 2015
Posts: 38
Default

Quote:
Originally Posted by Simone78 View Post
Actually not. I tried the opposite: reducing the RT time to 15 min, the time now recommended for the new SuperScript IV (and because I'm tired of waiting 1.5 hours!). Result: the yield was lower and the size slightly smaller, but not by much, considering it is an 80% reduction. However, in my case concatemers were not visible either way. You could try and see if this makes things better.
/Simone
Thanks! I tried the 5' biotin TSO, and indeed it got rid of the concatemer issues I was having with the low-input RNA.

I'm trying hard to coax as much efficiency and yield out of the protocol as possible, as I'm doing RT on a very low-abundance transcript. Looking at different TSO protocols, it seems people take slightly different approaches to the cycling conditions of the TSO reaction: could you comment on the 72-degree annealing and the subsequent cooling step? Some protocols go directly from annealing to cooling to 42 degrees, while the SMART-seq2 protocol specifically says to cool on ice. Have you seen a difference in yield? How about the length of the template annealing step? Do you think increasing the primer annealing time could help with a low-abundance transcript?
Old 09-10-2015, 11:45 PM   #152
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by SunPenguin View Post
Thanks! I tried the 5' biotin TSO, and indeed it got rid of the concatemer issues I was having with the low-input RNA.

I'm trying hard to coax as much efficiency and yield out of the protocol as possible, as I'm doing RT on a very low-abundance transcript. Looking at different TSO protocols, it seems people take slightly different approaches to the cycling conditions of the TSO reaction: could you comment on the 72-degree annealing and the subsequent cooling step? Some protocols go directly from annealing to cooling to 42 degrees, while the SMART-seq2 protocol specifically says to cool on ice. Have you seen a difference in yield? How about the length of the template annealing step? Do you think increasing the primer annealing time could help with a low-abundance transcript?
Hi,
interesting questions. I can tell you that I tried the following:
- skipped the 3 min @ 72 deg --> got much less cDNA after RT and PCR. So, apparently, this step is important to resolve secondary structure in the mRNA.
- performed the denaturation at 65 deg (as in the SSRTIII protocol) or at 72 deg for 2 or 3 min. No difference. I never tried to incubate longer, because the longer the RNA stays at high temperature, the more it gets degraded (I think).
- you can't add all the reagents and go directly from 72 to 42 degrees. SSRTII is temperature-sensitive and gets inactivated above 60 degrees.
Please let me know if you have additional questions, I'll be happy to help!
/Simone
Old 09-18-2015, 01:50 PM   #153
Luka
Member
 
Location: Cambridge

Join Date: Nov 2012
Posts: 10
Default

Hi everyone,

I've noticed that lately there has been an extensive discussion on which RTs are best for template switching and cDNA generation from a single cell. I can only say that in my hands SSII also usually gave the best results, keeping in mind that I very much follow Simone's protocol, combining it with some protocols from Regev's lab. I also have rather good experience with Maxima H Minus RT, and I know several other labs that are using it for single-cell RNA-seq.

I would be interested to know if anyone has tried using TGIRT III. From the literature these RTs appear to have higher processivity and fidelity than retroviral reverse transcriptases, and from the published papers it seems quite impressive compared to SSIII. Most interestingly, it seems to be more efficient at template switching, even if the comparisons done so far might not be optimal for SSIII. I was just curious whether anyone has any experience with this enzyme, or any knowledge about it when it comes to RNA-seq.

Cheers
Old 09-20-2015, 12:36 AM   #154
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by Luka View Post

I would be interested to know if anyone has tried using TGIRT III. From the literature these RTs appear to have higher processivity and fidelity than retroviral reverse transcriptases, and from the published papers it seems quite impressive compared to SSIII. Most interestingly, it seems to be more efficient at template switching, even if the comparisons done so far might not be optimal for SSIII. I was just curious whether anyone has any experience with this enzyme, or any knowledge about it when it comes to RNA-seq.

Cheers
TGIRT III is the commercial name for one of a new group of enzymes called group II intron reverse transcriptases. I got an aliquot of 3 variants directly from the Lambowitz group and tried them on single cells. I was surprised by the average size of the libraries: 4-5 kb vs 1.5-2 kb with the Smart-seq2 protocol! The strand-switch reaction works and the processivity of the enzymes is impressive (the reaction was only 15 min). That said, it never worked with inputs below 100 ng. The problem is that TGIRT III binds so tightly to the DNA after RT that you need to treat it with NaOH/HCl or SDS before PCR. If you don't do any column purification or dilute the RT reaction a lot afterwards, the PCR won't work because there is too much salt. Doing any purification before PCR is not advisable, but it's probably the only way to go. I am not sure about this TGIRT III; maybe now it works for much lower inputs than what I used. It looks very expensive, though...
/Simone
Old 09-21-2015, 12:34 PM   #155
SunPenguin
Member
 
Location: Boston

Join Date: Aug 2015
Posts: 38
Default

Quote:
Originally Posted by Simone78 View Post
Hi,
interesting questions. I can tell you that I tried the following:
- skipped the 3 min @ 72 deg --> got much less cDNA after RT and PCR. So, apparently, this step is important to resolve secondary structure in the mRNA.
- performed the denaturation at 65 deg (as in the SSRTIII protocol) or at 72 deg for 2 or 3 min. No difference. I never tried to incubate longer, because the longer the RNA stays at high temperature, the more it gets degraded (I think).
- you can't add all the reagents and go directly from 72 to 42 degrees. SSRTII is temperature-sensitive and gets inactivated above 60 degrees.
Please let me know if you have additional questions, I'll be happy to help!
/Simone
Hey Simone,

Thanks! That's definitely helpful. Some of the protocols I have seen ask for the RNA template/primer mix to be denatured at 72, cooled down to 42, and then the rest of the reaction mix (with the enzyme) added as quickly as possible while the whole mix is still hovering around 42 degrees. It all seems to make little difference, though, as long as the enzyme is not inactivated.
Old 09-22-2015, 05:38 AM   #156
Luka
Member
 
Location: Cambridge

Join Date: Nov 2012
Posts: 10
Default

Quote:
Originally Posted by Simone78 View Post
TGIRT III is the commercial name for one of a new group of enzymes called group II intron reverse transcriptases. I got an aliquot of 3 variants directly from the Lambowitz group and tried them on single cells. I was surprised by the average size of the libraries: 4-5 kb vs 1.5-2 kb with the Smart-seq2 protocol! The strand-switch reaction works and the processivity of the enzymes is impressive (the reaction was only 15 min). That said, it never worked with inputs below 100 ng. The problem is that TGIRT III binds so tightly to the DNA after RT that you need to treat it with NaOH/HCl or SDS before PCR. If you don't do any column purification or dilute the RT reaction a lot afterwards, the PCR won't work because there is too much salt. Doing any purification before PCR is not advisable, but it's probably the only way to go. I am not sure about this TGIRT III; maybe now it works for much lower inputs than what I used. It looks very expensive, though...
/Simone
Hi Simone,
Glad to see that you have already tested it. The one you got from Lambowitz is the same as the commercial one, and I agree it is pricey. I was about to do some tests myself, as we got some aliquots of the enzyme. I was very impressed when reading the papers by how processive this enzyme is. A shift from 1.5-2 kb to 4-5 kb makes me wonder whether this enzyme should be used for at least some RNA-seq libraries. I do single-cell RNA-seq, but I frequently run RNA-seq on a few hundred cells, never more than 500, so it seems almost perfect for those samples. I presume you have done your tests on purified RNA? You are saying that there is no way to make it work on single cells, but for amounts of RNA above the single-cell level it might work, as long as one strips the enzyme from the DNA. It seems that you just used the same approach as you did for Tn5; otherwise it just doesn't work?
Did you get to sequence any of those test libraries? I would also like to understand whether a 1.5-2 kb and a 4-5 kb library will differ substantially after sequencing. Would you recommend this enzyme for library preps in some particular cases, or is SSII good enough?

Thanks a lot, it's always great to talk to you and learn more about RNA-seq.
Old 09-22-2015, 06:04 AM   #157
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by Luka View Post
Hi Simone,
Glad to see that you have already tested it. The one you got from Lambowitz is the same as the commercial one, and I agree it is pricey. I was about to do some tests myself, as we got some aliquots of the enzyme. I was very impressed when reading the papers by how processive this enzyme is. A shift from 1.5-2 kb to 4-5 kb makes me wonder whether this enzyme should be used for at least some RNA-seq libraries. I do single-cell RNA-seq, but I frequently run RNA-seq on a few hundred cells, never more than 500, so it seems almost perfect for those samples. I presume you have done your tests on purified RNA? You are saying that there is no way to make it work on single cells, but for amounts of RNA above the single-cell level it might work, as long as one strips the enzyme from the DNA. It seems that you just used the same approach as you did for Tn5; otherwise it just doesn't work?
Did you get to sequence any of those test libraries? I would also like to understand whether a 1.5-2 kb and a 4-5 kb library will differ substantially after sequencing. Would you recommend this enzyme for library preps in some particular cases, or is SSII good enough?

Thanks a lot, it's always great to talk to you and learn more about RNA-seq.
Hi,
exactly, I only tried it on RNA, never on cells (and never sequenced anything, unfortunately). Since the input was still so much higher than what I wanted, I never pursued the experiments further. It might have worked eventually, but I didn't have the time back then (we were finalising the Tn5 paper in those days). I think it is worth trying anyway with a few hundred cells; it will be very interesting to see what people get when using it! And I believe that, sooner or later, someone will find a way to make it work on single cells too. Hopefully we won't be stuck with inefficient retroviral RTs forever!

As they say on the website, the enzyme is very sticky and you really need to go through the NaOH-HCl-precipitation/column procedure or the excess salt will inhibit your PCR. The same approach used for Tn5 (SDS) didn't really work, if I remember correctly (I should go back to my notes, though).
Good luck! I would be really interested in knowing how it works for you, if you are willing to share some information.
/Simone
Old 10-02-2015, 12:53 AM   #158
JJMS
Junior Member
 
Location: Netherlands

Join Date: Jan 2013
Posts: 5
Default Clontech SMARTscribe

Quote:
Originally Posted by wishingfly View Post
Hi Kneu, since you @me, I will reply here with my two cents with the choice of RTase. Sorry for the delay, I was off from the bench for a vacation.

As to the RT efficiency (or if we could say, enzyme activity), in my hand, SuperScript II > ProtoScript II> PrimeScript, while Maxima and SuperScript IV do not work at all. Considering the Invitrogen has not fixed the potential contamination yet, we now mainly use ProtoScript II from NEB.

We did have some communication with Invitrogen, and they acknowledged that they have received similar complaints from other customers in the U.S. and are now investigating the issue. I would encourage all of us in the U.S. to contact the customer service in your region and report the issue, so as to push Invitrogen to fix the problem as soon as possible.

Invitrogen also mentioned that product from outside the U.S. should be fine, because it comes from a different facility. However, the current sales system doesn't allow ordering the "non-U.S." version, so we researchers in the U.S. have to be patient until they fix the problem.
Hello everybody and especially Wishingfly,

I followed your advice on complaining to Life Technologies about the SSII contamination. Their European facility also distributes contaminated SuperScript II, as I found out from my results. Their reply seemed to me a little arrogant (or ignorant), as they said they have had no other complaints regarding this issue ("I'm very sorry to hear that there seems to be a contamination in the recent batches #1688365 and #1688366 and I will look into this in detail. So far, we have received no other complaints about the concerned lots regarding a contamination nor any other issues."). They would get back to me as soon as possible... That was written on 07-09.

As we are still waiting for a solution, in the meantime we have already successfully switched to Clontech's SMARTScribe (cat: 639536). Even with 21 amplification cycles, the profiles of the negative controls (0 cells) look nice and clean. It might help other users as well. Also a big high-five to Simone for supplying this protocol and for all your help and tips. Thanks!
Attached Files
File Type: pdf SMARTSEq2 Forum.pdf (413.2 KB, 102 views)
JJMS is offline   Reply With Quote
Old 10-02-2015, 04:39 AM   #159
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by JJMS View Post
Hello everybody and especially Wishingfly,

I followed your advice on complaining to Life Technologies about the SSII contamination. Their European facility also distributes contaminated SuperScript II, as I found out from my results. Their reply seemed to me a little arrogant (or ignorant), as they said they have had no other complaints regarding this issue ("I'm very sorry to hear that there seems to be a contamination in the recent batches #1688365 and #1688366 and I will look into this in detail. So far, we have received no other complaints about the concerned lots regarding a contamination nor any other issues."). They would get back to me as soon as possible... That was written on 07-09.

As we are still waiting for a solution, in the meantime we have already successfully switched to Clontech's SMARTScribe (cat: 639536). Even with 21 amplification cycles, the profiles of the negative controls (0 cells) look nice and clean. It might help other users as well. Also a big high-five to Simone for supplying this protocol and for all your help and tips. Thanks!
I am still convinced that SMARTScribe is repackaged SuperScript II, since in my hands they perform exactly the same.
Good that it works for you as well!
Btw, I also talked to Life Tech and they also told me they had had no complaints from other customers (that was back in July or August, I think), but since our SSII is fine I didn't pursue it further. They probably tell the same thing to everybody. Clever guys.
Simone78 is offline   Reply With Quote
Old 10-05-2015, 07:47 PM   #160
kobeho24
Member
 
Location: HKUST, Hong Kong

Join Date: Apr 2015
Posts: 32
Smile

Quote:
Originally Posted by Simone78 View Post
Hi,
exactly, I just tried it on RNA, never on cells (and never sequenced anything, unfortunately). Since the input required was still so much higher than what I wanted, I never pursued the experiments further. It might have worked eventually, but I didn't have the time back then (we were finalising the Tn5 paper in those days). I think it is worth trying anyway with a few hundred cells; it will be very interesting to see what people get when using it! And I believe that, sooner or later, someone will find a way to make it work on single cells too. Hopefully we won't be stuck with inefficient retroviral RTs forever!

As they say on the website, the enzyme is very sticky and you really need to go through the NaOH-HCl-precipitation/column procedure, or the excess salt will inhibit your PCR. The same approach we used for Tn5 (SDS) didn't really work, if I remember correctly (I should go back to my notes, though).
Good luck! I would be really interested in knowing how it works for you, if you are willing to share some information.
/Simone
Hi Simone,
Glad to know that you have already tried TGIRT yourself. I wonder if 0.2% SDS can strip the sticky enzyme off the DNA efficiently. Have you tried SDS together with a high-temperature incubation, and also RNase H/RNase A treatment? From the Bioanalyzer traces, things look much better than with the retroviral RT. Hope to see its application in single-cell RNA-seq.

Gary
kobeho24 is offline   Reply With Quote