Old 02-19-2012, 11:25 AM   #21
SeqAA
Guest
 

Posts: n/a

Quote:
Originally Posted by larissa
Even with 4% error rates? Will they deliver on improving that? That's not acceptable for clinical use. It may be very useful for a lot of other stuff.
I wouldn't say ONT is really going for that market yet.

I don't even think it competes with the current sequencers; it's something that complements them very nicely, and there are some applications only it can do.
Old 02-19-2012, 01:01 PM   #22
gringer
David Eccles (gringer)
 
Location: Wellington, New Zealand

Join Date: May 2011
Posts: 799

Quote:
Originally Posted by SeqAA
Quote:
That's not acceptable for clinical use. It may be very useful for a lot of other stuff.
I wouldn't say ONT is really going for that market yet.
The website is surprisingly useful for answering questions about many things related to their method and intentions (hence my numerous references to it). For example, they are definitely considering clinical uses for their nanopore system:

http://www.nanoporetech.com/technolo...lised-medicine

Quote:
Use of the GridION platform in Personalised Healthcare
The GridION platform is an electronic analysis system that can be tailored for the analysis of DNA, RNA, protein and other analytes. This novel technology has applications across personalised healthcare. This may include the analysis of a patient's DNA, discovery and validation of new protein biomarkers or an electronic diagnostic test for discovered biomarkers.
In the clinical setting, it looks like they're putting a bit of effort into more direct protein identification (via aptamers), which is likely to be more useful for diagnostic and monitoring purposes than the mostly unchanging DNA.

http://www.nanoporetech.com/technolo...tein-analysis-
Old 02-20-2012, 08:02 AM   #23
kgulukota
Member
 
Location: Illinois

Join Date: Oct 2011
Posts: 30

Finally! (I am being optimistic and assuming the legitimate concerns about accuracy will be adequately addressed over time).

I have always thought the true definition of "next generation" was not the amount of data but rather the read length. I have been disappointed that so-called "next-gen" technologies from ILMN and Life deliver shorter reads than Sanger, just boat-loads of them. It has always left a bad taste in my mouth.

Now with PacBio, GnuBio and Nanopore, it seems like we are finally focusing on the true next-gen. 100-megabase reads, anyone? Of course it drastically changes the jobs of bioinformatics folks like me. And makes life kinda fun!
__________________
Kamalakar Gulukota,
Director,
Center for Bioinformatics and Computational Biology
NorthShore University Health System, [email protected]
Old 02-21-2012, 05:37 AM   #24
joss211
Junior Member
 
Location: boston

Join Date: Mar 2011
Posts: 1

It seems to me like the acceptability of a 4% error rate would depend on the sample type. One advantage of sequencing clusters (or beads) is that each read is a pretty accurate determination of the sequence derived from a single template. I am a bit of a statistics moron, but it seems like if your starting material is impure (e.g. a tumor sample), it would be easier to distinguish normal sequence from a minority-contributor sequence (say 5%) if you are 99.9% sure of each base in a read than if you are only 96% sure. While 200x coverage might be sufficient for the former, would it be sufficient for the latter?
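
A quick back-of-the-envelope sketch of that intuition in Python - assuming, for simplicity, that errors fall uniformly on the three non-reference bases, that bases are independent within and across reads, and picking an arbitrary 0.1% false-positive budget per site:

Code:
from scipy.stats import binom

depth = 200     # reads covering the site
variant = 0.05  # minority contributor at 5%
alpha = 0.001   # false-positive budget per site (arbitrary choice)

for err in (0.001, 0.04):                    # 99.9% vs 96% accurate bases
    noise = err / 3                          # errors mimicking one specific alt base
    k = 1                                    # smallest alt count noise rarely reaches
    while binom.sf(k - 1, depth, noise) > alpha:
        k += 1
    p_alt = variant * (1 - err) + (1 - variant) * noise
    power = binom.sf(k - 1, depth, p_alt)    # chance a real 5% variant clears k
    print(f"per-base error {err:.1%}: call at >={k} alt reads, "
          f"P(detect at {depth}x) ~ {power:.2f}")

With these made-up numbers the 99.9%-accurate reads detect the 5% contributor essentially always, while the 96%-accurate reads need about 10 supporting reads to clear the noise floor and succeed only roughly three-quarters of the time - so 200x that suffices for the former is indeed marginal for the latter.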
Old 02-21-2012, 06:57 AM   #25
larissa
Member
 
Location: NC

Join Date: Mar 2010
Posts: 15

Quote:
Originally Posted by joss211
It seems to me like the acceptability of a 4% error rate would depend on the sample type. One advantage of sequencing clusters (or beads) is that each read is a pretty accurate determination of the sequence derived from a single template. I am a bit of a statistics moron, but it seems like if your starting material is impure (e.g. a tumor sample), it would be easier to distinguish normal sequence from a minority-contributor sequence (say 5%) if you are 99.9% sure of each base in a read than if you are only 96% sure. While 200x coverage might be sufficient for the former, would it be sufficient for the latter?
That's one of the reasons why a 4% error rate is not acceptable in the clinic for guiding treatment/dosing and inclusion/exclusion criteria. It will be acceptable for lots of other things, all R&D-oriented, in academia and the private sector.
Old 02-21-2012, 07:21 AM   #26
Nanoporous
Junior Member
 
Location: San Diego

Join Date: Dec 2009
Posts: 7

Quote:
Originally Posted by nxgsqq
I too agree the error is acceptable, IF it is indeed the typical error they see and not a "best case" presented for effect and advertisement. I am suspicious of resolving 64 levels (a 3-base read) electronically, considering how small the differences will be. I don't completely buy the algorithmic deconvolution either, especially if they are still using a polymerase. If it is a non-stochastic transport, a Viterbi/HMM algorithm might give 94% accuracy.

The bigger unknown is the true, customer-facing usability of their pores. I would assume they are Poisson-loading the pores before they ship to users. How many pores are still active after an hour of use? I know the bilayers can be made stable and inert, and I can accept that the error profile can be made length-independent (especially if they tether the motor to the pore; otherwise Brownian motion of long DNA can act to pull the complex off, even against the electric field), but how many pores are sequencing at a given time, and how does that number drop off over time?
They are certainly not using a polymerase; they have developed their own "motor protein" enzyme by screening 300+ mutants of some natural enzyme (possibly a helicase?).

Their nanopore array is possibly the most innovative feature of their system - they have developed some sort of synthetic polymer that replaces the lipid bilayer. So they are able to embed the protein in the polymer and array them on the chip, overcoming Poisson loading. According to Brown, 80% of the chip is still functional after 3 days (I assume 3 days after activation). They also exposed the chip to blood and sewage and it retained functionality. The extreme stability of the synthetic polymer is really what enables the MinION technology.
Old 02-21-2012, 11:16 AM   #27
Pongo_T
Junior Member
 
Location: Germany

Join Date: Feb 2012
Posts: 4

Quote:
Originally Posted by Nanoporous
Their nanopore array is possibly the most innovative feature of their system - they have developed some sort of synthetic polymer that replaces the lipid bilayer. So they are able to embed the protein in the polymer and array them on the chip, overcoming Poisson loading.

If he specifically mentioned overcoming Poisson, did he say how successful that was? I would say you can tweak it some, but you'll always have a bit of the old alea iacta est going on, with some no-shows and some doubles. It might make sense to ship dry and have a priming program run after activation. You clearly have to be recording in order to array the pores, so you can watch them go in, and I would guess it takes some time until you have a sizeable portion of membranes with exactly a single pore. Fascinating stuff.
Old 02-21-2012, 11:33 AM   #28
GW_OK
Senior Member
 
Location: Oklahoma

Join Date: Sep 2009
Posts: 383

If I recall correctly from his talk, he mentioned 25% of the wells (pore binding sites? array positions? not quite sure what to call them...) had a single pore.

edit: one could call them membranes, I suppose.
Old 02-21-2012, 11:45 AM   #29
scbaker
Shawn Baker
 
Location: San Diego

Join Date: Aug 2008
Posts: 84

Quote:
Originally Posted by kgulukota
Finally! (I am being optimistic and assuming the legitimate concerns about accuracy will be adequately addressed over time).

I have always thought the true definition of "next generation" was not the amount of data but rather the read length. I have been disappointed that so-called "next-gen" technologies from ILMN and Life deliver shorter reads than Sanger, just boat-loads of them. It has always left a bad taste in my mouth.

Now with PacBio, GnuBio and Nanopore, it seems like we are finally focusing on the true next-gen. 100-megabase reads, anyone? Of course it drastically changes the jobs of bioinformatics folks like me. And makes life kinda fun!
Is there any evidence that either PacBio or GnuBIO could reach ultra-long reads? PacBio is making improvements in read length, but at the pace they're going, it doesn't seem like they'd ever reach anything approaching 1 Mb. As for GnuBIO, how large an n-mer would they need to be able to sequence a whole human genome with their SBH approach?
Old 02-21-2012, 03:59 PM   #30
Pongo_T
Junior Member
 
Location: Germany

Join Date: Feb 2012
Posts: 4

Quote:
Originally Posted by GW_OK
If I recall correctly from his talk, he mentioned 25% of the wells (pore binding sites? array positions? not quite sure what to call them...) had a single pore.

edit: one could call them membranes, I suppose.
25%? Not exactly overcoming Poisson (ideally 36.8%, if I recall?)
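
For reference, that 36.8% is the Poisson single-occupancy ceiling: P(k=1) = lambda*e^(-lambda), which peaks at lambda = 1, giving 1/e. A minimal check in Python:

Code:
import math

def p_single(lam):
    """Poisson probability that a well receives exactly one pore."""
    return lam * math.exp(-lam)

# Scan loading densities; the single-pore fraction tops out at lam = 1.
best_lam = max((lam / 100 for lam in range(1, 301)), key=p_single)
print(best_lam, p_single(best_lam))   # 1.0  0.36787... = 1/e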
Old 02-24-2012, 10:47 PM   #31
BBoy
Member
 
Location: Pacific Northwest

Join Date: Oct 2010
Posts: 52

Quote:
Originally Posted by clivey
Thank you for spotting that, it's nice to meet someone with insight.

C.
Yes, that is an important but fairly trivial point, which does not seem to have precluded people from skewering previous single-molecule technologies, nor investors from suing the respective companies. The averaging over hundreds of thousands of strands at each site is the very thing that is limiting read lengths in the cluster technologies.

However, the original question was whether this is acceptable for clinical applications. And this is where one has to wonder - what good is a 10 kbp read if you only care about 200 bp? If you have to read the 10 kbp 50x to get the right accuracy on that 200 bp, then the lack of massive parallelization would seem to start working against ONT; massive parallelism is not their strong point. Am I wrong here?

Finally, on the question of deconvolving over 3 bases/64 levels: in the limited time I have spent deconvolving signals, this is about on the hairy edge of what is doable with a few percent noise in the signal, which probably explains why ONT has, for the moment, stayed away from 5+ bases. Without seeing an actual "heartbeat trace" it is difficult to judge whether the stated 95% accuracy is typical or best case. Sooner or later the data will be in the wild and we will know. For the moment it pays to remember that there are lies, damned lies, and statistics... and then there are conference papers, particularly at conferences with a strong industrial presence :-)
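
To make the 64-level deconvolution concrete, here is a toy version of the Viterbi/HMM idea nxgsqq mentioned: 64 states (all DNA 3-mers), transitions allowed only between overlapping 3-mers, and Gaussian noise over a made-up per-3-mer current table. Everything below (the level table, the noise model, the numbers) is invented for illustration and is emphatically not ONT's algorithm:

Code:
import itertools
import numpy as np

rng = np.random.default_rng(0)
KMERS = ["".join(p) for p in itertools.product("ACGT", repeat=3)]  # 64 states
LEVELS = rng.normal(0.0, 1.0, 64)   # hypothetical mean current per 3-mer
NOISE = 0.05                        # "a few percent noise in it"

# A 3-mer XYZ can only be followed by YZA, YZC, YZG or YZT.
SUCC = {i: [KMERS.index(k[1:] + b) for b in "ACGT"] for i, k in enumerate(KMERS)}

def simulate(seq):
    """Return the true 3-mer state path and a noisy current trace for it."""
    states = [KMERS.index(seq[i:i + 3]) for i in range(len(seq) - 2)]
    return states, LEVELS[states] + rng.normal(0.0, NOISE, len(states))

def viterbi(trace):
    """Most likely 3-mer state path given the observed current levels."""
    logem = -((trace[:, None] - LEVELS[None, :]) ** 2) / (2 * NOISE ** 2)
    score = logem[0].copy()
    back = np.zeros((len(trace), 64), dtype=int)
    for t in range(1, len(trace)):
        best = np.full(64, -np.inf)
        for i in range(64):             # push each state to its 4 successors
            for j in SUCC[i]:
                if score[i] > best[j]:
                    best[j], back[t, j] = score[i], i
        score = best + logem[t]
    path = [int(np.argmax(score))]
    for t in range(len(trace) - 1, 0, -1):   # follow the backpointers home
        path.append(int(back[t, path[-1]]))
    return path[::-1]

true_seq = "".join(rng.choice(list("ACGT"), 200))
truth, trace = simulate(true_seq)
called = viterbi(trace)
print("3-mer state accuracy:", np.mean(np.array(truth) == np.array(called)))

Even this crude model mostly recovers the state path at a few percent noise; pushing to 5-mers (1024 levels) shrinks the level spacing and is where, as BBoy says, things get hairy.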

As I said elsewhere, if only 75% of what they claim is true, the achievement is still impressive. The best part for them commercially is the ability to dip your toes in and play with the instrument for a relatively low upfront cost. This is also the biggest risk, as the barriers to exit are just as low as the barriers to entry. If they have overhyped this, the reaction will be swift and rather merciless.
Old 02-25-2012, 12:31 AM   #32
clivey
Member
 
Location: Oxford

Join Date: Jul 2008
Posts: 24

BBoy, you weren't at the talk so I will clarify if I can.

You raise the point about parallelism a lot. What you fail to consider is speed. Not all sensing circuits are the same. On a nanopore system the chemistry is not cyclical and not synchronized; it's free-running. How many bases per second you can measure depends on how quickly you can read the channel, and at what noise. 8k channels at 1000 bases per second per channel is in fact more data than a million sensors at 1 base per second per channel (obviously). Not all chips or circuits are the same, and there are significant constraints on the kinds of circuits you can pack onto silicon without making trade-offs in speed and noise. There's the rub in terms of sensor design. If your noise is too high at the given sample rate, you won't be sequencing. One answer proposed elsewhere for this is to "floss" DNA and read it several times. Good, but of course if you read it 4 times, that is like running 1/4 the number of sensors once, in terms of throughput.
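
The arithmetic behind those comparisons, spelled out (the channel counts and speeds are the ones in the post; the flossing penalty is the re-reading trade-off just described):

Code:
ont_channels, ont_speed = 8_000, 1_000   # free-running nanopore channels, bases/s each
big_channels, big_speed = 1_000_000, 1   # many slow, synchronized sensors

print(ont_channels * ont_speed)          # 8,000,000 bases/s
print(big_channels * big_speed)          # 1,000,000 bases/s

flossing_passes = 4                      # re-reading each strand 4 times...
print(ont_channels * ont_speed // flossing_passes)   # ...quarters net throughput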

So with a nanopore system, parallelism is only half the story -- speed (and signal-to-noise at that speed) is the other. Both are important, not just the parallelism. A sensor needs to be judged on both.

The other important and often overlooked feature is sensor lifetime. Small-volume wells or bubbles won't last very long -- minutes or hours. You won't get much data from that. Larger-volume systems run for longer and give more data per chip, lowering the cost per base.

The rules that apply to cyclical chemistries, like density of features, are not quite the same on a real-time system. Nanopore systems are very different.

Last edited by clivey; 02-25-2012 at 02:16 AM. Reason: Clearer wording
Old 02-25-2012, 12:43 AM   #33
clivey
Member
 
Location: Oxford

Join Date: Jul 2008
Posts: 24

Quote:
Originally Posted by GW_OK
If I recall correctly from his talk, he mentioned 25% of the wells (pore binding sites? array positions? not quite sure what to call them...) had a single pore.

edit: one could call them membranes, I suppose.
You guys! Tsk!

What I actually said was that we use a 4:1 input multiplex for each circuit, so 4 array wells per circuit. When you Poisson-load 4 wells, 1 gets no pores, 1 gets 1, 1 gets 2 and 1 gets 3 - on average, of course. The array can then switch the circuit to read the well with 1 pore in it, ignoring the others. So we are in fact beating Poisson w.r.t. the mapping of pores to circuits, and ensuring every circuit is being used. So when we say 8k pores, it means just that: single pores being read.
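
Putting numbers on the 4:1 multiplex (a sketch; the loading density of one pore per well on average is my assumption, not a stated spec):

Code:
import math

lam = 1.0                          # assumed mean pores per well
p_one = lam * math.exp(-lam)       # P(exactly one pore in a well) = 1/e ~ 36.8%
p_circuit = 1 - (1 - p_one) ** 4   # at least one of the 4 muxed wells is usable
print(f"single well usable: {p_one:.1%}; 4:1-muxed circuit usable: {p_circuit:.1%}")
# single well usable: 36.8%; 4:1-muxed circuit usable: 84.0%

So under independent loading the mux lifts a circuit from the 36.8% single-well ceiling to roughly 84%, which is beating Poisson on circuit usage even if not literally every circuit gets a single-pore well.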

I can see there are a lot of questions and misconceptions, which we can easily answer. If you have further questions, please email me directly from your institution's email account.

Last edited by clivey; 02-25-2012 at 02:15 AM. Reason: Clearer wording
Old 02-25-2012, 01:04 AM   #34
gringer
David Eccles (gringer)
 
Location: Wellington, New Zealand

Join Date: May 2011
Posts: 799

Quote:
Originally Posted by clivey
I think this is the last misconception I'm going to answer here. If you have further questions or assertions, please email me directly from your institution's email account.
Aww, I wanted to hear that the lambda sequencing was actually done by 5 passes on an Ion Proton.

Last edited by gringer; 02-25-2012 at 08:58 AM. Reason: Removed irrelevant quote that was subsequently removed
Old 02-25-2012, 07:27 AM   #35
krobison
Senior Member
 
Location: Boston area

Join Date: Nov 2007
Posts: 747

Quote:
Originally Posted by larissa
That's one of the reasons why a 4% error rate is not acceptable in the clinic for guiding treatment/dosing and inclusion/exclusion criteria. It will be acceptable for lots of other things, all R&D-oriented, in academia and the private sector.
Two things to note

1) "The clinic" is not monolithic. You seem to use this as a poor shorthand for "calling missense/nonsense and short indel mutations". There are a number of other applications with clinical value, such as detecting CNVs, chromosomal rearrangements, and transcription states, for which 4% base calling error would be quite tolerable.

2) The 4% error rate is dominated by indels in specific contexts; for assays looking for missense/nonsense mutations outside those contexts the system might be acceptable: simply toss any reads which show an indel in the neighborhood of the position you are interested in. This strategy is not unheard of in the 454/Ion Torrent world, due to their indel issues.
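
That read-tossing strategy is easy to prototype. A sketch with pysam (the BAM file name, contig, position and 10 bp window are placeholders, not a recommended protocol):

Code:
import pysam

def has_indel_near(read, pos, window=10):
    """True if the read's alignment has an ins/del within `window` bp of pos."""
    ref = read.reference_start
    for op, length in read.cigartuples:
        if op in (0, 7, 8):                   # M / = / X consume the reference
            ref += length
        elif op == 2:                         # D spans reference coordinates
            if ref - window <= pos <= ref + length + window:
                return True
            ref += length
        elif op == 1:                         # I sits at a single ref coordinate
            if abs(ref - pos) <= window:
                return True
        elif op == 3:                         # N (skip) consumes the reference
            ref += length
    return False                              # S/H/P don't consume the reference

bam = pysam.AlignmentFile("sample.bam", "rb")   # placeholder input
pos = 1_234_567                                 # candidate SNV position (0-based)
keep = [r for r in bam.fetch("chr1", pos, pos + 1)
        if r.cigartuples and not has_indel_near(r, pos)]
print(f"{len(keep)} indel-free reads cover the site")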
Old 02-25-2012, 04:14 PM   #36
BBoy
Member
 
Location: Pacific Northwest

Join Date: Oct 2010
Posts: 52

Quote:
Originally Posted by clivey
BBoy, you weren't at the talk so I will clarify if I can.

You raise the point about parallelism a lot. What you fail to consider is speed.
Let me clarify that I appreciate the speed and the asynchronous nature of the readout very much, and perhaps by continuously nitpicking on parallelism I am leaving the wrong impression of how I view ONT's announcement. The whole technology sounds rather impressive, even if (probably) somewhat puffed up for presentation reasons :-)

However, there are certain applications where parallelism matters and others where long reads do. The trivial example is a 100-page book that is read "competitively" by 100 people each reading 1 page/min and by 1 person reading 50 pages/min. If you want the contents of the whole book you definitely want the latter; it will be much easier to piece the storyline together. However, if you want the contents of a single page, the former is preferable. This is something that pretty much everyone on SEQanswers appreciates.

Things get a bit more interesting when you increase the size of the book, throw in errors, and introduce a random selection of segments. If you are interested in a certain short stretch of pages, the statistical advantage of massively parallel short-and-slow reads becomes considerable. It is my understanding that this is what clinical applications are all about, and this was the context in which I made my latest remark. Several people have already stated that if ONT's technology is anywhere close to what was presented, it is likely to thrive by creating its own niches of new applications. However, displacing short reads does not seem to be one of them.

Quote:
Originally Posted by clivey
So with a nanopore system, parallelism is only half the story -- speed (and signal-to-noise at that speed) is the other. Both are important, not just the parallelism. A sensor needs to be judged on both.
Absolutely. Both = throughput, and for many applications that is all that matters. Walmart makes a ton of money over billions of transactions where it makes a few cents on each. Oracle makes its money by making thousands of dollars on thousands of transactions.

However, for certain applications the metrics are different, and this is where I find ONT's "run until" marketing a bit over the top. If you are after only certain information then the error of the read can matter, and long reads on randomly cleaved strands can be a disadvantage when the accuracy is <100%.

Quote:
Originally Posted by clivey
The other important and often overlooked feature is sensor lifetime. Small-volume wells or bubbles won't last very long -- minutes or hours. You won't get much data from that. Larger-volume systems run for longer and give more data per chip, lowering the cost per base.
This is not inherent, but a characteristic of the particular sensor technology. I suspect that you are referring to nanopore technology specifically.

In any case, thanks for taking the time to write. Your presence in these debates is much appreciated, and very different from the approach other companies are using.
Old 02-25-2012, 06:04 PM   #37
nxgsqq
Junior Member
 
Location: usa

Join Date: Jan 2011
Posts: 3

Hi Clive,

If your membranes break (or your pores clog), can you destroy and re-form them? I don't know your email address to write to you directly.
Old 02-25-2012, 06:38 PM   #38
BBoy
Member
 
Location: Pacific Northwest

Join Date: Oct 2010
Posts: 52

Quote:
Originally Posted by nxgsqq
I don't know your email address to write to you directly.
As Randy Pausch said:

Quote:
The brick walls are there for a reason. The brick walls are not there to keep us out. The brick walls are there to give us a chance to show how badly we want something. Because the brick walls are there to stop the people who don’t want it badly enough. They’re there to stop the other people.
You can either PM him here for his email, try LinkedIn and the additional information he provides there, or just guess. If you know one email address at a company you more or less know them all; they typically conform to a strict pattern.
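
The "just guess" option, mechanized (the name and domain are of course made up; these are just the common corporate patterns):

Code:
first, last, domain = "jane", "doe", "example.com"   # hypothetical target
patterns = [
    f"{first}.{last}@{domain}",    # jane.doe@...
    f"{first[0]}{last}@{domain}",  # jdoe@...
    f"{first}@{domain}",           # jane@...
    f"{last}.{first}@{domain}",    # doe.jane@...
]
print("\n".join(patterns))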
Old 02-26-2012, 09:37 AM   #39
GW_OK
Senior Member
 
Location: Oklahoma

Join Date: Sep 2009
Posts: 383

Quote:
Originally Posted by clivey
You guys! Tsk!
In my defense, you were talking crazy fast (and in your defense, you only had 17 minutes of talk time).

I don't suppose there's any chance of the AGBT talk being released and/or re-done for release?
Old 02-26-2012, 03:56 PM   #40
JamesH
Member
 
Location: Brisbane

Join Date: Aug 2010
Posts: 19

I've been thinking more about the disruption the nanopore tech could cause. There's been a bit of discussion regarding the "niche" of de novo genome assembly, and granted, this will be the first thing people will want to get their hands on a system for (because of the long reads). But for me, "run until" and no library prep are what will probably make the bigger difference: you don't have to do just long reads, and there's no reason not to put shorter fragments in.

In our lab we do a lot of population genetics involving non-model plants and animals. This mostly involves genotyping a lot of individuals (usually at a cost of $10-30 per individual). While the cost of sequencing has been coming down, the time/cost/hassle of library prep has been the main barrier for us in going NGS (having to maintain hundreds of barcoded primers, etc.). Straight away I can see several ways I could make restricted libraries for each individual, not bother with PCR and any associated errors, then run a Gb or two until sufficient coverage is reached, and do genotyping-by-sequencing for close to the cost of what we currently do (a toy of that "run until" loop is sketched below).

The ability to do RNA-seq gene expression studies in the lab without having to send off to a core facility (even if you have to make cDNA) will also be pretty awesome. I can also see applications for the USB stick in agriculture, allowing field-based monitoring of resistance to insecticides/herbicides. I think that putting this kind of sequencing power straight into the hands of researchers is going to be a big game changer, and the more I think about it, the less I can see myself doing a lot of the things I currently do in the lab!
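
A toy of that "run until" controller - keep the flow cell running until every barcoded individual reaches a coverage target. All numbers (4 individuals, a ~1 Mb restriction-reduced representation per individual, a 20x target, ~8 kb reads) are invented stand-ins:

Code:
import random

random.seed(1)
TARGET_X = 20.0
REDUCED_GENOME = 1_000_000          # bp per individual surviving the digest
samples = [f"indiv_{i}" for i in range(4)]
coverage = dict.fromkeys(samples, 0.0)

reads = 0
while min(coverage.values()) < TARGET_X:
    barcode = random.choice(samples)           # which individual this read is from
    read_len = random.expovariate(1 / 8_000)   # long, variable read lengths
    coverage[barcode] += read_len / REDUCED_GENOME
    reads += 1

print(f"stopped after {reads} reads: "
      + ", ".join(f"{s}={c:.1f}x" for s, c in coverage.items()))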