SEQanswers

Old 07-16-2015, 02:25 PM   #121
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by Kneu View Post
Simone,
First of all, thank you for taking such an active role in answering questions on this forum; it is really great! Second, based on all the posted issues with SuperScript II and your comments on PrimeScript, I am also interested in switching to this enzyme (thank you, wishingfly, for posting the link). In your post above you said you "performed the RT for 90 mins"; does that mean you did not include the cycling steps at 50 °C when using PrimeScript? So instead you would perform the oligo-dT annealing at 72 °C, assemble the RT reaction with the same components except for the new enzyme and buffer, and run the RT at 42 °C for 90 min followed by 15 min at 70 °C. Does that sound correct?
Thank you in advance for all your help!
Exactly. I used the same protocol that I generally use with SuperScript II: 90 min at 42 °C, followed by inactivation for 15 min at 70 °C. The PCR is done afterwards with KAPA HiFi.
/Simone
Old 07-21-2015, 03:07 PM   #122
RDH
Junior Member
 
Location: Seattle

Join Date: Jul 2015
Posts: 1
Default

Hi Simone,

I am curious if you know of any way to use UMIs with your Smart-seq2 protocol?

Thanks!
Old 07-24-2015, 02:43 AM   #123
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by RDH View Post
Hi Simone,

I am curious if you know of any way to use UMIs with your Smart-seq2 protocol?

Thanks!
Yes... sorry, I can't give out further details.
It doesn't look very promising at the moment, at least with the approach I'm following.
Old 07-24-2015, 04:06 AM   #124
jwfoley
Senior Member
 
Location: Stanford

Join Date: Jun 2009
Posts: 181
Default

Quote:
Originally Posted by RDH View Post
I am curious if you know of any way to use UMIs with your Smart-seq2 protocol?
The problem is that preamplification is done before cDNA fragmentation, so your library has already been through PCR before there are even distinct fragments to label. However, if your goal is quantitative accuracy, you can use UMIs with end-targeted digital gene expression profiling rather than transcriptome resequencing, e.g. doi:10.1038/nmeth.2772. That's a bit off-topic for this thread but I can tell you more by private message if you're interested.
Old 07-24-2015, 07:09 AM   #125
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by jwfoley View Post
The problem is that preamplification is done before cDNA fragmentation, so your library has already been through PCR before there are even distinct fragments to label. However, if your goal is quantitative accuracy, you can use UMIs with end-targeted digital gene expression profiling rather than transcriptome resequencing, e.g. doi:10.1038/nmeth.2772. That's a bit off-topic for this thread but I can tell you more by private message if you're interested.
It's obvious that UMIs with the standard Smart-seq2 are useless, unless you want to sequence only the 3' or the 5' end (STRT, CEL-seq and all their variants), but in that case you can follow those protocols from the beginning.
I wouldn't just stick some NNNN into my TSO or oligo-dT and pretend it works and that I can count molecules!
Old 07-24-2015, 11:35 AM   #126
longwood
Junior Member
 
Location: Massachusetts

Join Date: May 2014
Posts: 5
Default

Quote:
Originally Posted by Simone78 View Post
It's obvious that UMIs with the standard Smart-seq2 are useless, unless you want to sequence only the 3' or the 5' end (STRT, CEL-seq and all their variants), but in that case you can follow those protocols from the beginning.
I wouldn't just stick some NNNN into my TSO or oligo-dT and pretend it works and that I can count molecules!
I'm still learning the ABCs of RNAseq analysis so pardon me if I sound ignorant but if the goal is quantitative accuracy of gene expression (to the extent that the 5' end of an mRNA faithfully reports this), adding UMIs to the TSO and following the standard Smart-seq2 protocol should work, right? I'm assuming the TSO is not messed up somehow by the extra few bases. This will of course yield more data than just 5' reads, so if one wishes to look at spliceforms, etc., that data would also be there to analyze (with obviously less quantitative accuracy). I would love to know if there's a major flaw in the UMI+Smart-seq2 approach if one wishes to simply compare gene expression among cells with greater quantitative accuracy. Thanks!
Old 07-27-2015, 03:38 AM   #127
AVRL
Junior Member
 
Location: Leeds, UK

Join Date: Jul 2015
Posts: 1
Default

Quote:
Originally Posted by solidestcloud View Post
I'm not sure if any of you have seen this? http://www.fluidigm.com/home/fluidig...-6199%20A1.pdf
We have just released a Single Cell mRNAseq protocol for the C1 system. It utilises the SMARTer chemistry but miniaturises it making it much cheaper.
Has anyone here used the C1 system on methanol-fixed cells? Or is it just not possible?

Thanks
Alice
Old 07-27-2015, 10:38 AM   #128
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by longwood View Post
I'm still learning the ABCs of RNAseq analysis so pardon me if I sound ignorant but if the goal is quantitative accuracy of gene expression (to the extent that the 5' end of an mRNA faithfully reports this), adding UMIs to the TSO and following the standard Smart-seq2 protocol should work, right? I'm assuming the TSO is not messed up somehow by the extra few bases. This will of course yield more data than just 5' reads, so if one wishes to look at spliceforms, etc., that data would also be there to analyze (with obviously less quantitative accuracy). I would love to know if there's a major flaw in the UMI+Smart-seq2 approach if one wishes to simply compare gene expression among cells with greater quantitative accuracy. Thanks!
In principle, I don't see any problem with adding UMIs to the Smart-seq2 oligos. However, please keep in mind that the length and the base composition of the oligos do affect the final results; there is (at least) one paper about it (http://www.ncbi.nlm.nih.gov/pubmed/24392002). Once you have designed your oligos, you can go ahead as if you were doing the standard Smart-seq2 protocol. The problem comes with the tagmentation: as you know, you will be able to count only the most 5' fragment of each transcript (or the most 3' fragment, if you put the UMIs on the oligo-dT). The internal fragments don't carry a UMI, so it will be impossible to correct their PCR bias. Of course, the more PCR cycles you do after tagmentation (necessary if you start from picograms of cDNA), the higher the bias and the more difficult it becomes to relate the "non-UMI fragments" to the "UMI fragments", so to speak. Or at least this is what I think. Please correct me if I am wrong!
/Simone
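To make the counting logic above concrete, here is a minimal Python sketch of UMI collapsing. The (gene, UMI) read tuples are hypothetical placeholders (in practice they would come from a tagged alignment), and internal fragments without a UMI are simply skipped, as discussed above.
Code:
from collections import defaultdict

def count_molecules(reads):
    """Collapse (gene, umi) read tuples into molecule counts per gene.

    Only reads carrying a UMI (the 5'-most fragment of each cDNA when the
    UMI sits in the TSO) can be counted; internal tagmentation fragments
    have no UMI (umi=None here) and are ignored.
    """
    umis_per_gene = defaultdict(set)
    for gene, umi in reads:
        if umi is not None:
            umis_per_gene[gene].add(umi)
    # one unique UMI per gene = one original molecule (PCR duplicates collapse)
    return {gene: len(umis) for gene, umis in umis_per_gene.items()}

# Three reads of the same ACTB molecule collapse into one count:
example = [("ACTB", "ACGTACGT"), ("ACTB", "ACGTACGT"),
           ("ACTB", "TTGCAAGT"), ("GAPDH", None)]
print(count_molecules(example))  # {'ACTB': 2}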
Old 07-29-2015, 10:03 AM   #129
Kneu
Junior Member
 
Location: Chicago

Join Date: Jun 2015
Posts: 2
Default Primescript

Quote:
Originally Posted by Simone78 View Post
Exactly. I used the same protocol that I generally use with SuperScript II: 90 min at 42 °C, followed by inactivation for 15 min at 70 °C. The PCR is done afterwards with KAPA HiFi.
/Simone
In the recent past I successfully got single-cell RNA-seq data with the Smart-seq2 protocol, but I was using the SuperScript III RT enzyme. I was unaware of its decreased template-switching capacity, and since reading these posts I have been trying to change my RT enzyme for improved cDNA yield. With 5'-biotinylated oligos and SuperScript II I did see improved amplification, but I had the same contamination reported earlier (lot # 1685467). So I switched to the recommended PrimeScript from Clontech. Unfortunately, when my Bioanalyzer results came back there was no amplification. Has anyone had recent success with this enzyme? I am trying to figure out what I could have done wrong. Briefly: I performed the oligo-dT annealing reaction at 72 °C for 3 min, then transferred to ice. I then set up the RT mix with PrimeScript, PrimeScript buffer, RNase inhibitor, DTT, betaine, MgCl2 and TSO. The RT reaction was 90 min at 42 °C, 15 min at 70 °C and then a 4 °C hold before going back on ice. The only thing I changed in the preamplification PCR was to increase the ISPCR oligo to 0.25 µM, since it now carries the 5' biotin, and I performed 20, 21 and 22 preamp cycles. Even my 100-cell well did not show any amplification, and I have not had trouble with this cell population in the past. Does anyone have ideas about what could have gone wrong? Wishingfly, have you gotten results back with PrimeScript yet?
Thanks in advance!
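For readers following along, the thermal steps described in the post can be written down as a small configuration. This is just a restatement of the conditions listed above (reagent concentrations other than the 0.25 µM ISPCR oligo are omitted), not a validated program.
Code:
# RT program as described above with PrimeScript; temperatures in deg C.
rt_program = [
    ("anneal oligo-dT",        72, "3 min, then snap to ice"),
    ("reverse transcription",  42, "90 min"),
    ("enzyme inactivation",    70, "15 min"),
    ("hold",                    4, "until preamplification"),
]
ispcr_oligo_uM = 0.25          # raised because of the 5' biotin
preamp_cycles_tested = [20, 21, 22]

for step, temp_c, duration in rt_program:
    print(f"{step}: {temp_c} C, {duration}")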
Old 07-31-2015, 10:44 AM   #130
amolinaro
Junior Member
 
Location: Toronto, Canada

Join Date: Jun 2015
Posts: 3
Default

Hi all,

I recently ran a small single-cell RNA-seq pilot experiment using Smart-seq2 and the results look very promising. The only problem is that I'm getting anywhere from 45-85% rRNA for each cell. This makes it difficult to study population heterogeneity since, when I exclude the rRNA reads, my depth drops substantially and I'm worried that many genes have been missed. Any ideas why I'm pulling out so much rRNA even with the oligo-dT primer? Any recommendations on how to deplete rRNA in single cells?

Thanks!
Alyssa
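One way to put a number on this, for anyone in the same situation: a minimal pysam sketch that estimates the rRNA read fraction of a sorted, indexed BAM from a BED file of rRNA intervals. The file names are placeholders, and reads spanning two intervals may be double-counted, so treat the result as a rough estimate.
Code:
import pysam

def rrna_fraction(bam_path, rrna_bed_path):
    """Rough fraction of mapped reads that fall in rRNA intervals."""
    bam = pysam.AlignmentFile(bam_path, "rb")
    rrna_reads = 0
    with open(rrna_bed_path) as bed:
        for line in bed:
            chrom, start, end = line.split()[:3]
            rrna_reads += bam.count(chrom, int(start), int(end))
    total_mapped = bam.mapped  # requires a .bai index
    bam.close()
    return rrna_reads / total_mapped if total_mapped else 0.0

# e.g. print(rrna_fraction("cell_01.bam", "rRNA_intervals.bed"))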
Old 08-11-2015, 08:19 AM   #131
bagnall.lab
Junior Member
 
Location: US

Join Date: Jun 2015
Posts: 6
Default

Quote:
Originally Posted by Kneu View Post
In the recent past I successfully got single-cell RNA-seq data with the Smart-seq2 protocol, but I was using the SuperScript III RT enzyme. I was unaware of its decreased template-switching capacity, and since reading these posts I have been trying to change my RT enzyme for improved cDNA yield. With 5'-biotinylated oligos and SuperScript II I did see improved amplification, but I had the same contamination reported earlier (lot # 1685467). So I switched to the recommended PrimeScript from Clontech. Unfortunately, when my Bioanalyzer results came back there was no amplification. Has anyone had recent success with this enzyme? I am trying to figure out what I could have done wrong. Briefly: I performed the oligo-dT annealing reaction at 72 °C for 3 min, then transferred to ice. I then set up the RT mix with PrimeScript, PrimeScript buffer, RNase inhibitor, DTT, betaine, MgCl2 and TSO. The RT reaction was 90 min at 42 °C, 15 min at 70 °C and then a 4 °C hold before going back on ice. The only thing I changed in the preamplification PCR was to increase the ISPCR oligo to 0.25 µM, since it now carries the 5' biotin, and I performed 20, 21 and 22 preamp cycles. Even my 100-cell well did not show any amplification, and I have not had trouble with this cell population in the past. Does anyone have ideas about what could have gone wrong? Wishingfly, have you gotten results back with PrimeScript yet?
Thanks in advance!
We had zero luck with PrimeScript for our work; NEB and SuperScript IV seem to be much better in our hands.
Old 08-13-2015, 10:20 AM   #132
SunPenguin
Member
 
Location: Boston

Join Date: Aug 2015
Posts: 38
Default

Hi Simone, et al.,

I found this thread via google...and it's been a life saver. I'm pretty new to doing TS protocols, and the info here is great.

Recently, I've been trying to use Smart-seq (SMARTer and Smart-seq2) on low RNA input. I'm actually trying to sequence a few particular transcripts (so I'm not using oligo-dT). With the low input, I seem to be getting a lot of concatemers after preamplification. I've read somewhere on here that it may help to reduce the primer/TSO concentration, but reducing the TSO concentration would also reduce cDNA yield. Has anyone had similar issues?
Old 08-14-2015, 12:30 AM   #133
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by SunPenguin View Post
Hi Simone, et al.,

I found this thread via google...and it's been a life saver. I'm pretty new to doing TS protocols, and the info here is great.

Recently, I've been trying to use Smart-seq (SMARTer and Smart-seq2) on low RNA input. I'm actually trying to sequence a few particular transcripts (so I'm not using oligo-dT). With the low input, I seem to be getting a lot of concatemers after preamplification. I've read somewhere on here that it may help to reduce the primer/TSO concentration, but reducing the TSO concentration would also reduce cDNA yield. Has anyone had similar issues?
Hi,
Yes, I had similar issues when working with immune cells. I noticed that decreasing the TSO concentration leads to lower yield, so I wouldn't do that; reducing the ISPCR primer and the oligo-dT (for the Smart-seq2 protocol) helps a bit, but it's not the solution. Instead, use biotinylated primers (a biotin group at the 5' end), which should prevent the formation of concatemers and primer dimers. An alternative approach would be to add three iso-nucleotides at the 5' end of your TSO, as described in PMID:20598146.
Good luck!
/Simone
Old 08-14-2015, 10:59 AM   #134
longwood
Junior Member
 
Location: Massachusetts

Join Date: May 2014
Posts: 5
Default

Has anyone tried using homebrew barcodes for multiplexing, or simply ordered the oligos (as listed in the Illumina Customer Sequence Letter) from an oligo vendor instead of buying the expensive index kits from Illumina? I would love to know what modifications are necessary and how well this approach works compared to the Illumina kit. I intend to sequence my libraries on the HiSeq 2500 platform. I need >300 barcodes and the Illumina kit costs >$3000, so I would love to save some money here if possible. Thanks all!
Old 08-14-2015, 02:21 PM   #135
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by longwood View Post
Has anyone tried using homebrew barcodes for multiplexing, or simply ordered the oligos (as listed in the Illumina Customer Sequence Letter) from an oligo vendor instead of buying the expensive index kits from Illumina? I would love to know what modifications are necessary and how well this approach works compared to the Illumina kit. I intend to sequence my libraries on the HiSeq 2500 platform. I need >300 barcodes and the Illumina kit costs >$3000, so I would love to save some money here if possible. Thanks all!
The original Nextera kit from Epicentre listed the concentrations of the oligos. In that kit they used the i5 + i7 pairs (0.5 µM final) plus two more oligos, the "PPC" mix (PCR Primer Cocktail, which I think Illumina still includes in the Nextera kit for inputs up to 50 ng). The PPC was 20 times more concentrated, i.e. 10 µM.
Briefly, I can tell you that:
- I took the oligo sequences from the Illumina website, ordered them from another vendor and used them in the same 20:1 ratio. The Illumina oligos have a phosphorothioate bond between the last two nucleotides at the 3' end (to make them resistant to nucleases), but I think they also have some kind of blocking group at the 5' end. Mine were not blocked, but they worked well anyway. However, when tagmenting picogram or sub-picogram inputs of DNA, the huge excess of unused primers led to a massive accumulation of dimers that could not be removed with the bead purification. I guess that was because they were not blocked. Result: many reads came just from the adaptors. A solution would be to titrate the amount of primers to your input DNA.
- If you plan to use the Nextera XT kit (that is, to start from <1 ng DNA for tagmentation), you can dilute your adaptors 1:5 (at least) and you won't see any difference. In this way the index kit becomes very affordable and you don't have to worry about dimers in your prep. If, in parallel, you also scale down the volume of your tagmentation reaction (20 µl for a Nextera XT kit is a huge waste!), the amount of index primer needed decreases even more. Even without liquid-handling robots you can easily perform a tagmentation reaction in 2 µl (5 µl final volume after PCR). Your kit will last 20 times longer and your primers even 100 times longer! I am currently using this strategy with the 384-index kit from Illumina: I buy the 4 sets of 96 primers each, dilute them and put them in a 384-well "index plate", ready to use on our liquid-handling robot.
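As an illustration of building such an index plate programmatically, here is a small Python sketch that lays out unique dual-index combinations across a 384-well plate. The index names and the 16 x 24 split are placeholders; the real i5/i7 sequences and the per-set layout would come from the Illumina documentation mentioned earlier.
Code:
from itertools import product
from string import ascii_uppercase

# Placeholder index names; substitute the real i5/i7 sequences from Illumina.
i5_indexes = [f"S5{i:02d}" for i in range(1, 17)]   # 16 i5 indexes (assumed)
i7_indexes = [f"N7{i:02d}" for i in range(1, 25)]   # 24 i7 indexes (assumed)

# 16 x 24 = 384 unique dual-index pairs, one per well of a 384-well plate
wells = [f"{row}{col}" for row in ascii_uppercase[:16] for col in range(1, 25)]
index_plate = dict(zip(wells, product(i5_indexes, i7_indexes)))

print(len(index_plate))                       # 384
print(index_plate["A1"], index_plate["P24"])  # ('S501', 'N701') ('S516', 'N724')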
Old 08-14-2015, 06:19 PM   #136
nucacidhunter
Jafar Jabbari
 
Location: Melbourne

Join Date: Jan 2013
Posts: 1,238
Default

To supplement the above post: a logical look at the Nextera and Nextera XT workflows indicates that the XT S5XX index primers have a biotin or another moiety at the 5' end. It serves both for 5'-end blocking and for normalisation, by binding to a limited number of streptavidin-coated (or other, depending on the moiety) beads to normalise the DNA mass. The concentrations of S5XX, N5XX and N7XX are similar when run on a Small RNA chip. The Nextera XT PCR is done with S5 and N7 primers only, but to compensate for the 50x higher DNA input in standard Nextera they add the PPC oligos, which are complementary to the flow-cell-binding motifs on the adapters. This way the full adapter sequences are restored on the tagmented fragments by the N5 (S5) and N7 primers, and the amplification is mostly done by the PPC primers.
Old 08-17-2015, 08:43 AM   #137
longwood
Junior Member
 
Location: Massachusetts

Join Date: May 2014
Posts: 5
Default

Extremely helpful information Simone78 and nucacidhunter! Thank you very much!
Old 08-17-2015, 01:29 PM   #138
SunPenguin
Member
 
Location: Boston

Join Date: Aug 2015
Posts: 38
Default

Quote:
Originally Posted by Simone78 View Post
Hi,
Yes, I had similar issues when working with immune cells. I noticed that decreasing the TSO concentration leads to lower yield, so I wouldn't do that; reducing the ISPCR primer and the oligo-dT (for the Smart-seq2 protocol) helps a bit, but it's not the solution. Instead, use biotinylated primers (a biotin group at the 5' end), which should prevent the formation of concatemers and primer dimers. An alternative approach would be to add three iso-nucleotides at the 5' end of your TSO, as described in PMID:20598146.
Good luck!
/Simone
Thank you so much! I ended up ordering ones with biotin, since that seems the most straightforward (without changing length of the primer, etc.)

In your experience, does the elongation time during the RT also matter? Some people in my lab have extended the time at 42 °C to 3 hours, which seems like it could result in more concatemers or weird priming.
Old 08-17-2015, 10:58 PM   #139
Simone78
Senior Member
 
Location: Basel (Switzerland)

Join Date: Oct 2010
Posts: 208
Default

Quote:
Originally Posted by SunPenguin View Post
Thank you so much! I ended up ordering ones with biotin, since that seems the most straightforward (without changing length of the primer, etc.)

In your experience, does the elongation time during the RT also matter? Some people in my lab have extended the time at 42 °C to 3 hours, which seems like it could result in more concatemers or weird priming.
Actually, no. I tried the opposite: reducing the RT time to 15 min, the time now recommended for the new SuperScript IV (and because I'm tired of waiting 1.5 hours!). Result: the yield was lower and the cDNA size slightly smaller, but not by much considering it is an ~80% reduction in time. However, in my case concatemers were not visible either way. You could try it and see if it makes things better.
/Simone
Old 08-18-2015, 11:44 AM   #140
eab
Member
 
Location: Maryland

Join Date: May 2011
Posts: 66
Default Superscript II concerns

We would like to know whether anyone thinks our single-cell transcriptome results may be questionable due to the SuperScript II issue that several people have raised. We have been using at least one of the SSII lots that have been questioned. I've attached cDNA traces (after KAPA amplification). We are seeing mapping percentages of 60-70% (for highly activated human lymphocytes) and 40-50% (for resting memory human lymphocytes).

Do people think there is likely to be a problem with our data? Clearly we are generating some product that does not depend on template coming from cells (see the "no cell" controls at right in the slide). Is there too much of that? And what do people think of mapping percentages of 40-50%?

Thanks very much!
Eli
Attached Files
File Type: pdf SSIItestcDNA.pdf (2.83 MB, 132 views)
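For anyone wanting to reproduce numbers like the mapping percentages quoted above, here is a minimal Python sketch that pulls the mapped-read fraction out of samtools flagstat output. The BAM path is a placeholder, and this simply parses the standard flagstat report rather than doing anything Smart-seq2-specific.
Code:
import re
import subprocess

def mapping_rate(bam_path):
    """Percentage of mapped reads as reported by samtools flagstat."""
    out = subprocess.run(["samtools", "flagstat", bam_path],
                         capture_output=True, text=True, check=True).stdout
    # flagstat prints a line like: "123456 + 0 mapped (65.43% : N/A)"
    match = re.search(r"mapped \(([\d.]+)%", out)
    return float(match.group(1)) if match else None

# e.g. print(mapping_rate("resting_memory_cell_01.bam"))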