SEQanswers

Old 03-04-2010, 09:03 AM   #21
LMcSeq
Member
 
Location: Maryland--we have a HiSeq too!

Join Date: Feb 2009
Posts: 33
Default

That's what I suspect is the problem, but I'm not sure how to effectively troubleshoot it with the new rapid protocol, since the chip isn't run until all of the steps are complete. I don't want to have to make libraries over and over...
I'm not sure if I should do a calibration with my XP beads and check the avg. frag length at different Covaris shearing times (I don't know whether that translates to the new procedure with the sizing solution and standard XP volume).
Any suggestions?
Old 03-04-2010, 09:41 AM   #22
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,318
Default

Quote:
Originally Posted by LMcSeq View Post
That's what I suspect is the problem, but I'm not sure how to effectively troubleshoot it with the new rapid protocol since the chip isn't run until the end of all of the steps are complete. I don't want to have to make libraries over and over...
I'm not sure if I should do a calibration with my XP beads and check the avg. frag length with different Covaris shearing times (don't know if it translates to the new procedure with the sizing solution and standard XP volume).
Any suggestions?
We would just size fractionate on an agarose gel if we got a library too far outside the size range. I realize that is not optimal for large numbers of samples, though.

Also, why not use nebulization instead of sonication? I believe nebulized DNA would have a higher fraction of ligatable ends after end-repair than sonicated DNA.

--
Phillip
Old 03-04-2010, 12:03 PM   #23
kmcarr
Senior Member
 
Location: USA, Midwest

Join Date: May 2008
Posts: 1,178
Default

Quote:
Originally Posted by pmiguel View Post
Okay, but if short primer is in spec, that means you do not have a large number of reads that traverse the insert completely. So preferential amplification of your small templates is not the issue.
Phillip, reads will only be reported as short primer if they are rejected because they are too short (≤ 84 flows, ~50 nt) after trimming off the B-adapter sequence. If an insert is 250 nt long, the raw read will likely traverse into the adapter, which will be trimmed from the final read output. Such a read will not be included in the shortPrimer filter count, which only reports reads that are rejected, not reads that are simply trimmed.
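For what it's worth, the trim-versus-reject behaviour described here can be sketched in a few lines of Python. This is a rough sketch only: the ~50 nt cutoff is the number quoted above, and the adapter sequence is a made-up placeholder, not Roche's actual B-adapter.

```python
# Rough sketch of the shortPrimer filter logic described above.
# MIN_TRIMMED_LEN approximates the ~50 nt cutoff quoted in this thread;
# B_ADAPTER is a made-up placeholder, NOT the real Roche sequence.
MIN_TRIMMED_LEN = 50

def classify_read(raw_read, b_adapter):
    """Trim at the B-adapter if present; reject as shortPrimer if the
    trimmed read is too short, otherwise keep it."""
    pos = raw_read.find(b_adapter)           # exact match for simplicity
    trimmed = raw_read[:pos] if pos != -1 else raw_read
    status = "shortPrimer" if len(trimmed) < MIN_TRIMMED_LEN else "kept"
    return status, trimmed

B_ADAPTER = "GTCCAT"  # placeholder

# A 250 nt insert reads into the adapter: trimmed, but kept.
s1, t1 = classify_read("A" * 250 + B_ADAPTER + "CC", B_ADAPTER)
print(s1, len(t1))   # kept 250

# A 30 nt insert: rejected as shortPrimer, not just trimmed.
s2, t2 = classify_read("A" * 30 + B_ADAPTER, B_ADAPTER)
print(s2, len(t2))   # shortPrimer 30
```

The point is the same as in the post: a long insert that runs into the adapter loses bases but survives, so it never shows up in the shortPrimer count.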
Old 03-05-2010, 01:35 AM   #24
seqAll
Member
 
Location: China

Join Date: Nov 2009
Posts: 21
Default

Quote:
Originally Posted by LMcSeq View Post
Hi all,
Quick question: I'm finding that my average frag length on the high sensitivity chip at the end of the rapid protocol is on the larger side. My sequencer results have had low average fragment lengths. I'm wondering if the larger frags aren't being amplified because they're too big for the microreactors, causing preferential amplification of smaller frags. I'm using the Covaris settings recommended for 500bp average. Anyone else having a similar issue?
Hi LMcSeq,

Do you have leftover enrichment beads? If so, you can run normal PCR with a few of these beads as template, using the amplification primers (with the same cycling conditions as the emPCR, and maybe only one or two cycles, so as not to skew the size distribution). Then run the product on a gel to check the size distribution.

Next time, you can save the supernatant after melting (the supernatant contains the single-stranded templates released from the beads) and run it on a gel as a control step before you proceed to sequencing.

It would be interesting to see whether it is the sequencing step (or maybe the algorithm) that cannot handle such long fragments.

Last edited by seqAll; 03-05-2010 at 01:47 AM.
Old 03-05-2010, 05:01 AM   #25
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,318
Default

Quote:
Originally Posted by kmcarr View Post
Phillip, reads will only be reported as short primer if they are rejected because they are too short (≤ 84 flows, ~50nt) after trimming off the B-adapter sequence. If an insert is 250nt long the raw read will likely traverse into the adapter which will be trimmed from the final read output. This will not be included in the shortPrimer filter count, that only reports reads which are rejected, not simply trimmed.
Wow, thanks for that information. Seems absolutely crazy for Roche not to provide a metric or graph that shows the length of library molecules for which a B-adapter sequence was located. With that metric you can immediately see many library issues. Without it you can waste your time looking for instrument problems.

I have been trying to convince Applied Biosystems of the same thing.
--
Phillip
Old 03-05-2010, 05:16 AM   #26
kmcarr
Senior Member
 
Location: USA, Midwest

Join Date: May 2008
Posts: 1,178
Default

Quote:
Originally Posted by pmiguel View Post
Wow, thanks for that information. Seems absolutely crazy for Roche not to provide a metric or graph that shows the length of library molecules for which a B-adapter sequence was located. With that metric you can immediately see many library issues. Without it you can waste your time looking for instrument problems.
Yes, it certainly would be nice. To check for B-adapter I output the untrimmed FASTA from the SFF file (sffinfo -s -n) and then search the reads against the B-adapter sequence with cross_match or fuzznuc (or your favorite sequence search tool).
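For anyone without cross_match or fuzznuc handy, the same check can be sketched in a few lines of Python: a toy mismatch-tolerant scan of an untrimmed read. The adapter sequence below is a made-up placeholder, not the real Roche B-adapter.

```python
# Toy version of the B-adapter check described above: scan an untrimmed
# read for the adapter and report where it starts (= the insert length).
# B_ADAPTER is a made-up placeholder, NOT the real Roche sequence.

def find_adapter(read, adapter, max_mismatches=2):
    """Return the first position where `adapter` aligns to `read` with
    at most `max_mismatches` mismatches, or -1 if it is not found."""
    k = len(adapter)
    for i in range(len(read) - k + 1):
        mismatches = sum(a != b for a, b in zip(read[i:i + k], adapter))
        if mismatches <= max_mismatches:
            return i
    return -1

B_ADAPTER = "CTGAGACTGCCAAGGCACACAG"   # placeholder
read = "ACGT" * 10 + B_ADAPTER + "TT"  # 40 nt "insert" then adapter
print(find_adapter(read, B_ADAPTER))   # 40
```

cross_match or fuzznuc will do this far faster and with proper alignment, but a histogram of these positions over a few thousand untrimmed reads gives a quick picture of the insert-length distribution the instrument actually saw.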
Old 03-05-2010, 05:48 AM   #27
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,318
Default

Quote:
Originally Posted by kmcarr View Post
Yes, it certainly would be nice. To check for B-adapter I output the untrimmed FASTA from the SFF file (sffinfo -s -n) and then search the reads against the B-adapter sequence with cross_match or fuzznuc (or your favorite sequence search tool).
Yes.
As it turns out cross_match and fuzznuc are my favorite sequence search tools.
Have you tried looking for long (read-killing) homopolymer runs this way? Or do they not show up even in the untrimmed sequence?

--
Phillip
Old 03-12-2010, 10:39 AM   #28
bia
Junior Member
 
Location: Italy

Join Date: Nov 2008
Posts: 5
Default

Hi everybody,
has anyone experienced emPCR inhibition problems when using shotgun rapid libraries?
Thanks,
bia
Old 03-16-2010, 06:23 AM   #29
Cambridge454
Junior Member
 
Location: Cambridge UK

Join Date: Nov 2009
Posts: 7
Rapid Library length problems

I am also having problems with the Rapid Library producing really long average fragment lengths (>900 bp). I use the recommended nebulization settings, which worked perfectly well for old-version shotgun libraries (600 bp avg.). I have used the new vented caps and the rubber stopper/filter nebulizer set-up with very similar results. Should I increase the nitrogen pressure, to say 2.5 bar instead of 2.1 bar?
Old 03-16-2010, 06:52 AM   #30
LMcSeq
Member
 
Location: Maryland--we have a HiSeq too!

Join Date: Feb 2009
Posts: 33
Default

Cambridge454--I get long frag lengths using the Covaris as well. If you adjust the nebulization, I would add more time rather than change the pressure. Try doing a time-course study (that's what I did with the Covaris): shear at the same pressure for different amounts of time and assess on the Bioanalyzer. Unfortunately, I think the longer fragment lengths are just part of the rapid library process. I'm just hoping it doesn't cause preferential amplification of smaller frags, since the larger ones may not fit into the microreactor bubbles during emPCR.
Old 03-16-2010, 07:42 AM   #31
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,318
Default

Quote:
Originally Posted by Cambridge454 View Post
I am also having problems with the Rapid Library producing really long average fragment lengths >900bp. I use the recommended nebulization settings, which worked perfectly well for old version Shotgun libraries (600bp avg.). I have used the new vented caps and the rubber stopper/filter nebulizer set-up with very similar results. Should I increase the nitrogen pressure to say 2.5 bar instead of 2.1 bar?
I would check your fragmentation sizes prior to ligation; downstream changes in library construction should not affect those. If the pre-ligation fragment sizes have changed, then something in your nebulization setup is at fault.

Err -- just to make sure this isn't something trivial: what are you measuring your fragment lengths with? If you are using a nano or picoRNA bioanalyzer chip (like one used to do with the old method) then keep in mind that your library is double stranded now and will appear to be roughly twice as "long" on chips intended to assay single stranded molecules. (I don't think the denaturation step prior to loading a nano or picoRNA chip would be sufficient to denature 600 bp fragments of DNA.)

--
Phillip
Old 03-16-2010, 12:19 PM   #32
Old guy
Member
 
Location: Lincoln, Nebraska

Join Date: Feb 2010
Posts: 12
Default

I have made 10 libraries with the rapid kit. All came out very well based on the quantification and HS Bioanalyzer results. When the first two were amplified with emPCR the results were poor. Low bead count. Anybody else experiencing this? Any suggestions?
Old 03-16-2010, 12:58 PM   #33
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,318
Default

Quote:
Originally Posted by Old guy View Post
I have made 10 libraries with the rapid kit. All came out very well based on the quantification and HS Bioanalyzer results. When the first two were amplified with emPCR the results were poor. Low bead count. Anybody else experiencing this? Any suggestions?

We don't trust the fluorescence assay. Even in principle it does not appear that it would be diagnostic. Say your library consisted of fragments with an adaptor ligated to only one end: they would not amplify at all, but on the fluorimeter the library might look fine. This is a realistic example, because if end repair did not work well, this is exactly the situation you will end up in.

For example, imagine a library in which only 10% of the fragment ends are ligatable. If the two ends of each fragment ligate independently, the resulting library molecules will partition as follows:

81% No adaptor
18% One adaptor
1% Two adaptors

Only the two-adaptor molecules will amplify. But there are 18x as many one-adaptor molecules, each fluorescing half as intensely as a two-adaptor molecule, and they will not amplify. So if you trust the fluorescence, only about 10% of the signal comes from amplifiable molecules, and you will load far less usable library into your emPCR than you intended.
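This back-of-the-envelope estimate can be checked in a few lines, assuming each fragment end ligates (or not) independently and that fluorescence comes from one labeled adaptor per ligated end. Both are assumptions of this sketch, not Roche specifications.

```python
# Sketch: how much adaptor-derived fluorescence actually comes from
# amplifiable (two-adaptor) molecules, assuming each fragment end
# ligates independently with probability p, and each ligated adaptor
# contributes one fluorophore's worth of signal.
def adaptor_partition(p):
    """(no-adaptor, one-adaptor, two-adaptor) fractions of the library."""
    return ((1 - p) ** 2, 2 * p * (1 - p), p ** 2)

def amplifiable_signal_fraction(p):
    """Fraction of total adaptor fluorescence due to two-adaptor
    molecules, which are the only ones that will amplify in emPCR."""
    _, one, two = adaptor_partition(p)
    return (2 * two) / (one + 2 * two)

none, one, two = adaptor_partition(0.10)
print(round(none, 2), round(one, 2), round(two, 2))   # 0.81 0.18 0.01
print(round(amplifiable_signal_fraction(0.10), 2))    # 0.1
```

With 10% ligatable ends, only about a tenth of the fluorescence corresponds to molecules that can amplify, which is why a fluorimeter reading alone overestimates usable library.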

We do qPCR and then do emPCR based on that. qPCR takes longer but at least it should specifically detect amplicons.

Generally you need ~5-20% recovery upon emPCR enrichment to have enough beads to run but not a high percentage of mixed beads. So there is some leeway there. Still we end up too high or too low as much as 50% of the time. That will probably improve with experience, but our end results are good now.

The Roche protocols tend to have a lot of fault tolerance built into them, but sometimes they include a few steps that are designed by bench magicians for bench magicians. Keeps your head from swelling too much, I guess...

--
Phillip
Old 03-16-2010, 01:58 PM   #34
SeqMonster
Member
 
Location: california

Join Date: Jul 2009
Posts: 21
New problem - 100% enrichment

Hi guys, I just started using the rapid kit not long ago and have done 2 runs from those libraries. The first one turned out to have >1 million reads but with a lot of short reads, and the second one had very high mixed & dots even though the enrichment was around 9%. Now I am doing the third one and something weird happened.

I set up 2 cups of LV emPCR using the same amount and same source of DNA; one worked nicely, with around 9% enrichment, but the other one had almost 100% enrichment. The emulsion didn't appear to be broken and we couldn't figure out what was happening. We repeated the emPCR with the same amount of DNA and the same thing happened again. Does anyone have any idea what's going on?

Thanks.

/yw
Old 03-17-2010, 05:19 AM   #35
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,318
Default

Quote:
Originally Posted by SeqMonster View Post
Hi guys, I just started using the rapid kit not long ago and did 2 runs from those libraries. The first one turned out to have >1million reads but with a lot of short reads and the second one has very high mixed&dots even though the enrichment is around 9%. Now, I am doing the third one and some weird thing happened.

I set up 2 cups of LV emPCR using the same amount and same source of DNA, one worked nicely with around 9% enrichment but the other one has almost 100% enrichment. The emulsion didn't appear to be broken and we couldn't figure out what was happening. We repeated the emPCR with the same amount of DNA and same thing happened again. Does anyone has any idea what's going on?

Thanks.

/yw
I don't have an answer, but it should not be possible to get 100% enrichment--there are not enough enrichment beads for that to happen. In cases where something went badly awry with calculations and we put in vastly more library (e.g. 40x) than called for, we got something like 40% enrichment.

What does GS support say?

--
Phillip
Old 03-17-2010, 06:02 AM   #36
LMcSeq
Member
 
Location: Maryland--we have a HiSeq too!

Join Date: Feb 2009
Posts: 33
Default

SeqMonster: Keep in mind that the DNA capture beads are now much more efficient than they previously were. You may need to adjust your cpb (copies per bead) input if you aren't doing a titration to optimize. We used to use 2 cpb and now we're down to 1, and probably going lower.
Old 03-17-2010, 08:13 AM   #37
pmiguel
Senior Member
 
Location: Purdue University, West Lafayette, Indiana

Join Date: Aug 2008
Posts: 2,318
Default

Quote:
Originally Posted by LMcSeq View Post
SeqMonster: Keep in mind that the DNA capture beads are now much more efficient than they previously were. You may need to adjust your cpb input if you aren't doing titration to optimize. We used to use 2 cpb and now we're down to 1 and probably going lower.
Which capture beads are these? In the emPCR? Or enrichment?

Also, how are you assaying "copies"? Straight fluorimetry? Fluorimetry using the wacky dye-labeled Rapid Adaptors?

--
Phillip
Old 03-17-2010, 10:05 AM   #38
seqAll
Member
 
Location: China

Join Date: Nov 2009
Posts: 21
Default

Quote:
Originally Posted by LMcSeq View Post
SeqMonster: Keep in mind that the DNA capture beads are now much more efficient than they previously were. You may need to adjust your cpb input if you aren't doing titration to optimize. We used to use 2 cpb and now we're down to 1 and probably going lower.
Why are the DNA capture beads much more efficient than before? Thanks!
Old 03-17-2010, 03:48 PM   #39
SeqMonster
Member
 
Location: california

Join Date: Jul 2009
Posts: 21
Default

Quote:
Originally Posted by pmiguel View Post
I don't have an answer, but it should not be possible to get 100% enrichment--there are not enough enrichment beads for that to happen. In cases where something went badly awry with calculations and we put in vastly more library (e.g. 40x) than called for, we got something like 40% enrichment.

What does GS support say?

--
Phillip
I am still providing them information and haven't gotten any concrete answer yet. Sorry for the bad explanation. What I meant by 100% enrichment was that almost all the DNA beads were bound to the enrichment beads: after the mixture sat on the magnet for some time, we got a clear solution instead of the normal milky solution with a lot of DNA beads.

The weird thing is that I was preparing 2 cups with the same DNA and this only happened in one of the cups. The other one enriched and sequenced perfectly.

SeqMonster
Old 03-17-2010, 03:53 PM   #40
SeqMonster
Member
 
Location: california

Join Date: Jul 2009
Posts: 21
Default

Quote:
Originally Posted by LMcSeq View Post
SeqMonster: Keep in mind that the DNA capture beads are now much more efficient than they previously were. You may need to adjust your cpb input if you aren't doing titration to optimize. We used to use 2 cpb and now we're down to 1 and probably going lower.
Hi LMcSeq, I actually had it the other way round. Since I started using the rapid kit, I have had to use a higher cpb than I normally used, almost twice as much, for a good enrichment of around 10%. But the sequencing results turned out to have quite a high mixed/dots %, and a lot of short reads.

Attached are the Agilent trace from after the library preparation and the read-length chart from the RunBrowser.
Attached Images
File Type: jpg LIB034_readlength_GACT_chart.jpg (18.3 KB, 30 views)
Attached Files
File Type: pdf 2100 expert_High Sensitivity DNA Assay_DE72901979_2010-03-01_16-05-41.pdf (1.37 MB, 35 views)