SEQanswers
Old 06-04-2014, 10:37 AM   #21
jwfoley

Quote:
We use an LNA-containing TSO and always get better coverage at the 5′ end of transcripts than with the Clontech kit, which uses 3 riboGs.
Not a fair comparison since your protocol has several other optimizations relative to the Clontech kit (or just its legacy version, I assume).

Quote:
The unblocked LNA-containing TSO still performed better, so we don't block the TSO unless the "hedgehog" signal shows up.
Is it realistic to buy two versions of the oligo, and risk losing precious samples, just to squeeze out a slight performance gain? I would rather stick to the most robust version of the protocol.
Old 06-07-2014, 01:08 AM   #22
Simone78

Quote:
Originally Posted by jwfoley View Post
Not a fair comparison since your protocol has several other optimizations relative to the Clontech kit (or just its legacy version, I assume).
You're right. But if you check the Supplementary Information of our Nat Methods paper you will see several comparisons. We singled out the effects of the polymerase and the TSO and showed, among other things, that the LNA-based TSO performs better than the rG3-based TSO, with all the other factors IN THE SMART-SEQ2 PROTOCOL kept the same.

Quote:
Originally Posted by jwfoley View Post
Is it realistic to buy two versions of the oligo, and risk losing precious samples, just to squeeze out a slight performance gain? I would rather stick to the most robust version of the protocol.
These days we FACS-sort most of our cells. Every cell type requires a few adjustments for optimal results (amount of PCR primers, number of cycles, etc.). We always have to test a few cells anyway when we start a new project. Only if the TSO creates concatemers do we switch to the iso-TSO. The most robust version of the protocol is still the one with the unmodified LNA-TSO (at least for us).
Old 06-07-2014, 03:16 PM   #23
jwfoley

Quote:
We singled out the effects of the polymerase and the TSO and showed, among other things, that the LNA-based TSO performs better than the rG3-based TSO, with all the other factors IN THE SMART-SEQ2 PROTOCOL kept the same.
If I'm reading this right (doi:10.1038/nmeth.2639, supplementary table 1, sheet B, rows 34 and 35) your LNA TSO yielded 7% shorter cDNAs than your pure RNA TSO, and 11% shorter than the SMARTer TSO, on average. This is consistent with my suspicion that the gain in yield is due in part to the TSO's mispriming in the middle of the cDNA by strand invasion.

Quote:
These days we FACS-sort most of our cells. Every cell type requires a few adjustments for optimal results (amount of PCR primers, number of cycles, etc.). We always have to test a few cells anyway when we start a new project. Only if the TSO creates concatemers do we switch to the iso-TSO. The most robust version of the protocol is still the one with the unmodified LNA-TSO (at least for us).
I guess we have different ideas of what robustness should be. To me, it means any inexperienced lab can perform the protocol as written and see it work well the first time. Reoptimizing every time, fallback versions, etc. are okay for large service centers or labs that do the same protocol a lot, but they discourage wide adoption of the method.

Also, you mention using iso nucleotides as your 5' blocker. Have you done the test using biotin instead? That seems more common in the literature, and is a lot cheaper (plus it's difficult to get LNAs and iso bases from the same company). In my experience it works just fine to eliminate the "hedgehog".
Old 06-08-2014, 01:59 AM   #24
Simone78

Quote:
Originally Posted by jwfoley View Post
If I'm reading this right (doi:10.1038/nmeth.2639, supplementary table 1, sheet B, rows 34 and 35) your LNA TSO yielded 7% shorter cDNAs than your pure RNA TSO, and 11% shorter than the SMARTer TSO, on average. This is consistent with my suspicion that the gain in yield is due in part to the TSO's mispriming in the middle of the cDNA by strand invasion.
That's a possibility that can't be excluded, and a phenomenon that is certainly happening to some extent. I already said the protocol is not perfect...
What we wanted to show with that table, as well as Suppl. Figures 1, 3 and 5 (among others), is the synergistic effect of betaine, MgCl2 and the LNA-TSO. The LNA-TSO was chosen because it improved the efficiency of the strand-switch reaction, a limitation when using rG3. Undesired phenomena are possible, though.

Quote:
Originally Posted by jwfoley View Post
I guess we have different ideas of what robustness should be. To me, it means any inexperienced lab can perform the protocol as written and see it work well the first time. Reoptimizing every time, fallback versions, etc. are okay for large service centers or labs that do the same protocol a lot, but they discourage wide adoption of the method.
Yes, I guess we have different ideas. If you had tried the protocol yourself instead of just looking for mistakes or inaccuracies, you would have realized how robust it is. I wasn't talking about a complete makeover of the protocol each time you start with a new cell type, just a few adjustments that won't take more than half a day. On the other hand, cells differ in RNA content (to mention one factor), which requires a different number of cycles and amount of primers in order to avoid over/under-amplification, an excess of primer dimers, etc. I assume people out there are able to do this without following a protocol (but we also mention it: if you had read the Nat Protocols paper you would have noticed step 14). To my knowledge, many groups have already switched to Smart-seq2, got good results and are happy with it. Verba volant, scripta manent, to say it in Latin.

Quote:
Originally Posted by jwfoley View Post
EDIT: on further inspection, those bands are definitely the characteristic "hedgehog" signal from using unblocked oligos. Block them and it should be fine. I don't know why there are any protocols floating around out there that say blocking is optional; it's so cheap to fix.
Quote:
Originally Posted by jwfoley View Post
Also, you mention using iso nucleotides as your 5' blocker. Have you done the test using biotin instead? That seems more common in the literature, and is a lot cheaper (plus it's difficult to get LNAs and iso bases from the same company). In my experience it works just fine to eliminate the "hedgehog".
Quote:
Originally Posted by jwfoley View Post
Is it realistic to buy two versions of the oligo, and risk losing precious samples, just to squeeze out a slight performance gain? I would rather stick to the most robust version of the protocol.
It's not really clear to me what you're suggesting... Do you want to block it or not? The "most robust version", according to what you say, is the 3rG one, which is not blocked... but then you suggest biotin as a blocking group... I'm a bit confused! Besides, biotin, as well as a 2′-OMe group, phosphates, etc., doesn't make things better, at least in our hands... I would be really interested in seeing biotin work as a blocking group in this context, if you have results showing that...
Best,
Simone
Old 06-08-2014, 05:24 AM   #25
jwfoley

Quote:
If you had tried the protocol yourself instead of just looking for mistakes or inaccuracies, you would have realized how robust it is.
I did try it, and I spent a couple of weeks testing various things before I ran it on the Bioanalyzer. Then I saw the hedgehog pattern, and realized I had wasted all that time optimizing my yield of concatemers, so I ordered biotin-blocked oligos, waited another week for them to arrive, and moved on.

Quote:
It's not really clear to me what you're suggesting... Do you want to block it or not? The "most robust version", according to what you say, is the 3rG one, which is not blocked... but then you suggest biotin as a blocking group... I'm a bit confused! Besides, biotin, as well as a 2′-OMe group, phosphates, etc., doesn't make things better, at least in our hands... I would be really interested in seeing biotin work as a blocking group in this context, if you have results showing that...
Sorry if that was confusing. Yes, I always use a blocking group now, but I use biotin instead of unnatural bases because it is much cheaper. I was getting the "hedgehog" fairly consistently without the biotin, and I have never seen it with the biotin, though so far I've only run a few dozen samples of either type on the Bioanalyzer.

Here is the test that convinced me not to do any more preps with unblocked oligos:

Unfortunately we had a bad lot of Bioanalyzer reagents (this seems to be a common problem), so the markers ran too slowly in the first wells, but you can clearly see a hedgehog pattern in every sample where I used unblocked primers. Post-PCR, those peaks totally overload the Bioanalyzer. Even the water (no template) control doesn't have a hedgehog when I use biotinylated oligos - at least not before PCR.

Last edited by jwfoley; 06-08-2014 at 05:26 AM.
Old 06-08-2014, 05:59 AM   #26
Simone78

I think we are talking about two different things.
What you have on your Bioanalyzer when you use unblocked oligos looks to me like primer dimers (and not longer concatemers), which are a problem anyway, and it's interesting to see that it works for you. We also have this problem, but usually the final results are not affected. Yes, this "short stuff" gets tagmented by the Tn5 and we waste reads sequencing it, but it's never too bad.
Since we also make our own AMPure-style beads, I just changed the % of PEG in the bead buffer, because the % of PEG directly determines the size cutoff of the beads. In this way we further reduce the leftover dimers in the final library.
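Just to illustrate the bead arithmetic (this is not part of the protocol): a minimal Python sketch that computes how much bead suspension to add for a given bead:sample ratio and looks the ratio up in a calibration table. The ratio-to-cutoff values below are placeholders, not measured numbers; anyone making their own beads would need to calibrate them on a DNA ladder for their particular PEG/bead lot.
Code:
# Illustrative only: rough size-selection bookkeeping for SPRI-style beads.
# The ratio -> cutoff mapping is a PLACEHOLDER to be calibrated empirically
# (e.g. on a DNA ladder) for each homemade bead / PEG batch.

def bead_volume(sample_ul: float, ratio: float) -> float:
    """Volume (uL) of bead suspension to add for a given bead:sample ratio."""
    return sample_ul * ratio

# Hypothetical calibration: bead:sample ratio -> approximate lower size cutoff (bp)
CALIBRATION = {1.8: 100, 1.0: 250, 0.8: 350, 0.6: 500}

sample_ul = 25.0
for ratio, cutoff in sorted(CALIBRATION.items(), reverse=True):
    print(f"{ratio:.1f}x ratio: add {bead_volume(sample_ul, ratio):.1f} uL beads, "
          f"keeps fragments above ~{cutoff} bp (placeholder value)")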
Anyway, what I mean when I talk about "concatemers" is a profile that looks more like the one I show here. Samples 5 and 6 have a "moderate" amount of concatemers, while samples 9 and 10 are much worse.

Btw, your post-PCR traces with the biotinylated oligos aren't looking too good. You have a lot of short stuff between 60 and 90 s that shouldn't be there (premature termination of the RT? degradation?). Have you sequenced any single cells using this protocol? I guess these libraries have a strong 3′ bias.
Old 06-08-2014, 02:41 PM   #27
jwfoley

Quote:
What you have on your Bioanalyzer when you use unblocked oligos looks to me like primer dimers (and not longer concatemers)
The smallest peak, which I also get from the blocked oligos, is primer dimer. Larger peaks must be concatemers (primer multimers), since I see them even when I do the TS-RT with no RNA template. You can call them whatever you want, but they contain no useful cDNA sequence and they get above the size range that I can safely size-select out (and at very high concentrations, apparently), so I strongly prefer the blocked oligos to keep it clean. Maybe you get the wider range of concatemer sizes because you're using the LNA TSO?

Quote:
You have a lot of short stuff between 60 and 90 s that shouldn't be there (premature termination of the RT? degradation?). Have you sequenced any single cells using this protocol? I guess these libraries have a strong 3′ bias.
These particular samples contained fragmented RNA, so that's expected.

No, I haven't brought it down to single-cell amounts of input yet. But if I do, I may not have the option of testing one batch of expensive oligos to see if it works, and then switching to a different batch of expensive oligos if it doesn't, on the same sample. So I'm using only blocked oligos, for every sample.

Last edited by jwfoley; 06-08-2014 at 02:45 PM.
Old 06-09-2014, 08:57 AM   #28
Simone78

Quote:
Originally Posted by jwfoley View Post
No, I haven't brought it down to single-cell amounts of input yet. But if I do, I may not have the option of testing one batch of expensive oligos to see if it works, and then switching to a different batch of expensive oligos if it doesn't, on the same sample. So I'm using only blocked oligos, for every sample.
The cost per sample of the "batch of expensive oligos" is negligible... you don't have to order buckets of it...
It would be interesting to see how the gene-body coverage of your TSO compares with the kit and with a "batch of expensive oligos".
Since we made the protocol so cheap, I think I'll stick to a batch of expensive oligos for now, at least until you prove the rest of the world wrong... good luck with the tests!
Old 06-09-2014, 10:23 AM   #29
jwfoley

Quote:
The cost per sample of the "batch of expensive oligos" is negligible... you don't have to order buckets of it...
Oh, maybe that's why we see this differently. With my supplier, the smallest possible order of the TSO is about 50 micrograms, for about USD 500. Obviously that's enough for hundreds or thousands of libraries, but if you only want to make a dozen, it's a pretty big commitment, especially when the rest of the reagents are off-the-shelf and/or can be bought in much smaller effective amounts. So doubling the amount of TSO you must purchase, just to try a risky way of getting a slight yield increase that may require falling back to the foolproof way anyhow, is rather unattractive if you only plan to do one experiment rather than start a service center. Otherwise you might just buy the kit.
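For what it's worth, the "hundreds or thousands of libraries" figure is easy to sanity-check. Here is a back-of-the-envelope calculation in Python; the oligo length, final TSO concentration and RT volume are my own assumptions for illustration, not numbers taken from either protocol.
Code:
# Rough estimate of how many RT reactions a 50 ug oligo order covers.
# All parameters below are assumptions for illustration only.
OLIGO_LENGTH_NT = 45          # assumed TSO length (nt)
AVG_MW_PER_NT = 330.0         # g/mol, rough average for single-stranded DNA
ORDER_UG = 50.0               # smallest order size mentioned above
FINAL_CONC_UM = 1.0           # assumed final TSO concentration in the RT
RT_VOLUME_UL = 10.0           # assumed RT reaction volume

oligo_mw = OLIGO_LENGTH_NT * AVG_MW_PER_NT            # ~14,850 g/mol
total_nmol = ORDER_UG * 1000.0 / oligo_mw             # ug -> nmol
nmol_per_rxn = FINAL_CONC_UM * RT_VOLUME_UL / 1000.0  # uM x uL = pmol; /1000 -> nmol
print(f"{total_nmol:.1f} nmol total / {nmol_per_rxn:.2f} nmol per reaction "
      f"= ~{total_nmol / nmol_per_rxn:.0f} reactions")
# With these assumptions: ~3.4 nmol total, 0.01 nmol per reaction, ~340 reactions.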

Quote:
at least until you prove the rest of the world wrong
I don't know what the rest of the world does, but the STRT-seq folks aren't even outside your institution: "Critical: This oligo must be 5′ biotinylated to prevent template switching at its 5′ end (which would generate useless background reads)." (doi:10.1038/nprot.2012.022) At any rate, your own protocol's note that unblocked oligos sometimes result in concatemers is more than enough to persuade me not to waste any time on them.
Old 06-09-2014, 11:20 PM   #30
Simone78

Quote:
Originally Posted by jwfoley View Post
I don't know what the rest of the world does, but the STRT-seq folks aren't even outside your institution: "Critical: This oligo must be 5′ biotinylated to prevent template switching at its 5′ end (which would generate useless background reads)." (doi:10.1038/nprot.2012.022) At any rate, your own protocol's note that unblocked oligos sometimes result in concatemers is more than enough to persuade me not to waste any time on them.
I suggest you read both papers from the Linnarsson group once again (the Genome Research 2011 and the Nature Protocols 2012 papers). Neither of them uses a biotinylated TSO; I don't know where you read that...
In the Nat Protocols paper the biotinylation refers to the oligo-dT primer (the STRT-V3-T30VN oligo), while in the Genome Research paper they biotinylated both the STRT-V3-T30VN and the STRT-PCR primer (to capture the cDNA later on). It obviously works for them, but it was less successful for us (because of the different TSO? I don't know). That's why I was trying to block the TSO instead.
Old 06-14-2014, 07:19 AM   #31
wacguy

Hi Simone,

I've been using your Smart-seq2 protocol to construct over 20 libraries in the last couple of months with great results (Arabidopsis root tissue; around a few dozen to a few hundred cells, starting with ~30 pg-3 ng total RNA). This week it stopped working. The problem is somewhere in one of the reactions before tagmentation: there is almost no trace of the ~2,000 bp peak on the Bioanalyzer after the pre-amplification reaction. I know this is one of those weird things we biologists experience and there is usually no straightforward answer, but maybe you have a suggestion, or have noticed that one of the materials is highly sensitive (e.g., to freeze-thaw of the KAPA enzyme).

Besides that, two less critical issues: what % of mapped reads do you get, and do you get a lot of rRNA reads (I do)? Also, why did you include the VN nucleotides at the end of the poly(dT) primer, and did you try primers without them?

Another question: why not use a lower AMPure bead ratio after the KAPA reaction, the same as after the XT reaction, to get rid of all oligos that are <200 bp?

BTW, I've tried adding more of the poly(dT) primer and it seems that less rRNA is present in the final reads, but, as I said, currently nothing works.

Thanks and have a nice weekend,
Guy
Old 06-14-2014, 01:34 PM   #32
Simone78

Quote:
Originally Posted by wacguy View Post
Hi Simone,

I've been using your Smart-seq2 protocol to construct over 20 libraries in the last couple of months with great results (Arabidopsis root tissue; around a few dozen to a few hundred cells, starting with ~30 pg-3 ng total RNA). This week it stopped working. The problem is somewhere in one of the reactions before tagmentation: there is almost no trace of the ~2,000 bp peak on the Bioanalyzer after the pre-amplification reaction. I know this is one of those weird things we biologists experience and there is usually no straightforward answer, but maybe you have a suggestion, or have noticed that one of the materials is highly sensitive (e.g., to freeze-thaw of the KAPA enzyme).
Hi Guy,
It's difficult to say why it's not working. The KAPA polymerase is not that sensitive; in fact, on their website they say you could even leave it at RT over the weekend and it'll still work (but don't do it!). The other reagents are not sensitive either. Maybe your RNase inhibitor has gone bad, or you started using a new batch of primers that is not good (problems in the synthesis)? My only suggestion is to replace ALL the reagents and start over, which I know is a bit annoying.

Quote:
Originally Posted by wacguy View Post
Besides that, two less critical issues: what % of mapped reads do you get, and do you get a lot of rRNA reads (I do)? Also, why did you include the VN nucleotides at the end of the poly(dT) primer, and did you try primers without them?
We usually get 60% uniquely mapping reads, 20% multi-mapping and 20% unmappable. We keep only the uniquely mapping reads, of course, and of those, 60% map to exons, 20% are intronic and 20% intergenic. rRNA is never an issue: always <5%, usually around 1-2%.
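In case it helps anyone reproduce this kind of breakdown: below is a minimal sketch (not our actual pipeline) of how the unique/multi/unmapped percentages can be tallied with pysam from a BAM produced by an aligner that reports multi-mapping via the NH tag (e.g. STAR). The file name is a placeholder.
Code:
# Minimal sketch: tally uniquely mapping / multi-mapping / unmapped reads.
# Assumes the aligner writes the number of hits in the NH tag (e.g. STAR).
import pysam

counts = {"unique": 0, "multi": 0, "unmapped": 0}

with pysam.AlignmentFile("smartseq2_cell.bam", "rb") as bam:  # placeholder file name
    for read in bam.fetch(until_eof=True):
        if read.is_secondary or read.is_supplementary:
            continue  # count each read once, at its primary alignment record
        if read.is_unmapped:
            counts["unmapped"] += 1
        elif read.has_tag("NH") and read.get_tag("NH") > 1:
            counts["multi"] += 1
        else:
            counts["unique"] += 1

total = sum(counts.values())
for category, n in counts.items():
    print(f"{category}: {100.0 * n / total:.1f}%")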
We include the "VN" to make sure the oligo-dT anneals at the very beginning of the poly(A) tail, thus avoiding an unnecessarily long stretch of A that might cause problems for the reverse transcriptase and, later, for the KAPA polymerase.
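If the "VN" notation is unclear: V and N are IUPAC degenerate bases (V = A/C/G, N = A/C/G/T), so the penultimate base of the primer can never be T and the primer cannot sit in the middle of the poly(A) stretch. The tiny sketch below just spells this out; the 5′ handle shown is a placeholder, not the actual published oligo-dT30VN sequence.
Code:
# Why the VN anchor pins the oligo-dT at the body/poly(A) junction.
# IUPAC codes: V = A/C/G (never T), N = any base.
from itertools import product

IUPAC = {"V": "ACG", "N": "ACGT"}
anchors = ["".join(p) for p in product(IUPAC["V"], IUPAC["N"])]
print(len(anchors), "possible VN anchors:", anchors)  # 12 combinations

oligo = "5'-<PCR handle, placeholder>-" + "T" * 30 + "-V-N-3'"
print(oligo)
# Because V excludes T, it cannot pair with an A in the template, so the
# primer's 3' end is forced onto the last non-A base(s) just upstream of the
# poly(A) tail, anchoring reverse transcription at the start of the tail.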


Quote:
Originally Posted by wacguy View Post
Another question: why not use a lower AMPure bead ratio after the KAPA reaction, the same as after the XT reaction, to get rid of all oligos that are <200 bp?
Initially that was our intention, but we noticed that we were also losing some of the long fragments along with the short ones. We are now making our own beads and we change the buffer composition according to the size of the fragments we want to recover, but it's never perfect anyway. A solution would be to block the primers, of course. I have only done limited trials on that, but I think it is the only thing that might really solve the problem.
Best,
Simone
Old 06-14-2014, 04:52 PM   #33
wacguy
Thank you Simone!

I appreciate your reply. Changing EVERYTHING and checking the PCR machine was, and is, my next plan.

Best,
Guy

Last edited by wacguy; 06-15-2014 at 06:11 AM.
Old 06-30-2014, 04:54 PM   #34
konglongjidan

For the TSO, is HPLC purification necessary for this protocol? And could you recommend any reliable vendors in the US for locked nucleic acids? We usually order oligos from IDT, but they don't offer LNA synthesis. Thanks!
Old 06-30-2014, 05:37 PM   #35
jwfoley

Exiqon is the only LNA vendor in North America... though I believe they actually get their orders produced by IDT anyway.
Old 06-30-2014, 06:12 PM   #36
wacguy

Hi konglongjidan,

I used the same companies mentioned in the Smart-seq2 article (Biomers and Exiqon). I work at Duke, so I guess you would be able to do the same. They were all HPLC-purified; I wouldn't change that.

Good luck,
Guy
Old 07-04-2014, 12:24 AM   #37
linampli

Thank you guys for an exciting and very informative exchange of ideas and protocols. Smart-seq2 with the LNA TSO seems to work better than with 3rG, as mentioned in the paper, at least in my hands. Just for information, I get my LNA TSO synthesized by Eurogentec (in Europe) at 40 nmol scale; the yield was 95 micrograms and it cost me 30 euros, so it is not expensive at all.
Old 07-09-2014, 06:15 AM   #38
eab

Simone,
Really appreciate all your advice on this forum.
How much have you played with the concentration of betaine during the RT, or investigated other duplex destabilizers?
Many thanks for any feedback.
Eli
Old 07-09-2014, 06:18 AM   #39
eab

Also, if one were to leave out the betaine, would you predict a lower library yield and a bias in the data due to selective loss of reads from transcripts with strong secondary structure?
Old 07-10-2014, 02:20 AM   #40
Simone78

I tried 0.5, 1, 1.5 and 2 M betaine (all final concentrations). While the cDNA yield after preamplification was higher at 1 M than at 0.5 M, no further improvement was observed above 1 M. Additionally, the highest concentration of betaine solution you can buy (or make) is 5 M, which means that 1/5 of your reaction volume is already taken up when using 1 M final. Adding more than 1 M becomes impractical.
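The volume argument is just C1V1 = C2V2; a quick sketch (the 10 µL reaction volume is only an example, not a fixed protocol number):
Code:
# C1*V1 = C2*V2: volume of 5 M betaine stock needed per reaction.
# The 10 uL reaction volume is an example value only.

def stock_volume_ul(final_m: float, rxn_ul: float, stock_m: float = 5.0) -> float:
    """Volume (uL) of betaine stock to reach final_m in a rxn_ul reaction."""
    return final_m * rxn_ul / stock_m

for final_m in (0.5, 1.0, 1.5, 2.0):
    v = stock_volume_ul(final_m, rxn_ul=10.0)
    print(f"{final_m} M final: {v:.1f} uL of a 5 M stock "
          f"({100 * v / 10.0:.0f}% of the reaction volume)")
# 1 M final already takes 2 uL of 10 uL (1/5 of the reaction); 2 M would take 2/5.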
As we show in the Nat Methods paper, betaine alone is not sufficient; somehow it works only when combined with 6 mM or higher MgCl2 (9-12 mM to maximize yield). I tried betaine in the PCR as well (or only in the PCR and not in the RT), but the results were not great and I dropped it. We don't know whether leaving out betaine causes lower detection of genes with strong secondary structure, but we do know that we get worse coverage at the 5′ end of genes, which might be indirect evidence that we do lose highly structured genes.
As alternatives to betaine I tried trehalose (the second best), sorbitol (bad), DMSO (bad) and formamide (bad). For experiments with betaine, trehalose and sorbitol on total RNA, please see Suppl. Table 1 in the Nat Methods paper.
Best,
Simone