Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| 2012 AGBT conference sold out | cosmid | Events / Conferences | 3 | 11-08-2011 07:15 AM |
| SEQanswers/SEQwiki AGBT poster abstract? | dan | Wiki Discussion | 41 | 02-17-2011 11:34 PM |
| 2011 AGBT thread! | ECO | Events / Conferences | 8 | 02-09-2011 09:11 AM |
| AGBT Roundup / News Thread | ECO | Events / Conferences | 6 | 03-02-2010 05:34 AM |
| Agbt 2009 | bioinfosm | Events / Conferences | 15 | 02-20-2009 03:24 PM |
#21
Guest
Posts: n/a

Quote:

I don't even think it competes with the current sequencers. This is something that complements them very nicely, and some applications it can do uniquely.
#22
David Eccles (gringer)
Location: Wellington, New Zealand
Join Date: May 2011
Posts: 811

Quote:
http://www.nanoporetech.com/technolo...lised-medicine

Quote:
http://www.nanoporetech.com/technolo...tein-analysis-
#23
Member
Location: Illinois
Join Date: Oct 2011
Posts: 30

Finally! (I am being optimistic and assuming the legitimate concerns about accuracy will be adequately addressed over time.)

I have always thought the true definition of "next generation" was not the amount of data but rather the read length. I have been disappointed that the so-called "next-gen" technologies from ILMN and Life deliver shorter reads than Sanger, just boat-loads of them. It has always left a bad taste in my mouth. Now with PacBio, GnuBio and Nanopore, it seems like we are finally focusing on the true next-gen. 100-megabase reads, anyone?

Of course it drastically changes the jobs of bioinformatics folks like me. And makes life kinda fun!
__________________
Kamalakar Gulukota, Director, Center for Bioinformatics and Computational Biology, NorthShore University Health System, [email protected]
#24
Junior Member
Location: boston
Join Date: Mar 2011
Posts: 1

It seems to me like the acceptability of a 4% error rate would depend on the sample type. One advantage of sequencing clusters (or beads) is that each read is a pretty accurate determination of sequence derived from a single template. I am a bit of a statistics moron, but it seems like if your starting material is impure (e.g. a tumor sample), it would be easier to distinguish normal sequence from minority-contributor sequence (say 5%) if you are 99.9% sure of each base in a read than if you are 96% sure of each base in a read. While 200x coverage might be sufficient for the former, would it be sufficient for the latter?
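A back-of-the-envelope way to put numbers on this (my own sketch, not from the thread; it assumes errors fall evenly on the three non-reference bases and uses an arbitrary 5-read calling threshold):

```python
# Back-of-the-envelope sketch (hypothetical numbers): can a 5% minority
# variant at one site be separated from sequencing noise at 200x coverage?
from scipy.stats import binom

coverage = 200
variant_fraction = 0.05   # minority contributor (e.g. tumor) at 5%
threshold = 5             # call the variant if >= 5 reads support it (arbitrary)

for per_base_error in (0.001, 0.04):
    # Assume errors fall evenly on the 3 non-reference bases, so the chance
    # that an error mimics *this particular* variant base is error/3.
    p_noise = per_base_error / 3
    p_false_pos = binom.sf(threshold - 1, coverage, p_noise)
    # Variant-supporting reads ~ true variant fraction plus noise (approximation).
    p_detect = binom.sf(threshold - 1, coverage, variant_fraction + p_noise)
    print(f"error={per_base_error:.1%}: false positive={p_false_pos:.3g}, "
          f"detect 5% variant={p_detect:.3g}")
```

At 0.1% per-base error the site-level noise essentially never reaches the threshold, while at 4% a threshold low enough to catch the 5% variant also fires on pure noise a sizeable fraction of the time, so more coverage or explicit error modeling would indeed be needed.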
#25
Member
Location: NC
Join Date: Mar 2010
Posts: 15

Quote:
#26
Junior Member
Location: San Diego
Join Date: Dec 2009
Posts: 7

Quote:

Their nanopore array is possibly the most innovative feature of their system - they have developed some sort of synthetic polymer that replaces the lipid bilayer. So they are able to embed the protein in the polymer and array them on the chip, overcoming the Poisson distribution. According to Brown, 80% of the chip is still functional after 3 days (I assume 3 days after activation). They also exposed the chip to blood and sewage and it retained functionality. The extreme stability of the synthetic polymer is really what enables the MinION technology.
#27
Junior Member
Location: Germany
Join Date: Feb 2012
Posts: 4

Quote:

If he specifically mentioned overcoming Poisson, did he say how successful that was? I would say you can tweak it some, but you'll always have a bit of the old alea iacta est stuff going on, with some no-shows and some doubles. It might make sense to ship dry and have some priming program run after activation. You clearly have to be recording in order to array the pores, so you can watch them go in, and it would take some time, I guess, until you have a sizeable portion of membranes with exactly a single pore. Fascinating stuff.
#28
Senior Member
Location: Oklahoma
Join Date: Sep 2009
Posts: 401

If I recall correctly from his talk, he mentioned 25% of the wells (pore binding sites? array positions? not quite sure what to call them...) had a single pore.

edit: one could call them membranes, I suppose.
#29
Shawn Baker
Location: San Diego
Join Date: Aug 2008
Posts: 84

Quote:
#30
Junior Member
Location: Germany
Join Date: Feb 2012
Posts: 4

25%? Not exactly overcoming Poisson (ideally 36.8%, if I recall?)
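For reference, that 36.8% is the standard Poisson-loading ceiling (my addition; textbook math, not from the talk): with a mean of λ pores per well, the single-pore fraction peaks at λ = 1.

```latex
% Fraction of wells with exactly one pore under Poisson loading:
P(k=1;\lambda) = \lambda e^{-\lambda}
% The derivative (1-\lambda)e^{-\lambda} vanishes at \lambda = 1, so the best case is
P_{\max} = e^{-1} \approx 0.368
```

So 25% single-pore wells sits below the unaided optimum, which is presumably where the 4:1 well multiplexing described a few posts down comes in.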
#31
Member
Location: Pacific Northwest
Join Date: Oct 2010
Posts: 52

Quote:

However, the original question was whether this is acceptable for clinical applications. And this is where one has to wonder - what good is a 10 kbp read if you only care about 200 bp? If you have to read the 10 kbp 50x to get the right accuracy on that 200 bp, then the lack of massive parallelization with ONT would seem to start working against them. This would not seem to be ONT's strong point. Am I wrong here?

Finally, on the question of deconvoluting over 3 bases/64 levels: in the limited time I have played with deconvolving signals, this is about on the hairy edge of what is doable with a signal carrying a few percent noise, and it probably explains why ONT has, for the moment, stayed away from 5+ bases. Without seeing an actual "heartbeat trace" it is difficult to judge whether the stated 95% accuracy is typical or best case. Sooner or later the data will be in the wild and we will know. For the moment it pays to remember that there are lies, damn lies, and statistics... and then there are conference papers, particularly at conferences with a strong industrial presence :-)

As I said elsewhere, if only 75% of what they claim is true, the achievement is still impressive. The best part for them commercially is the ability to dip your toes and play with the instrument for a relatively low upfront cost. This is also the biggest risk, as the barriers to exit are just as low as the barriers to entry. If they have overhyped this, the reaction will be swift and rather merciless.
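Rough numbers for the 64-level point (my own sketch, assuming 64 equally spaced levels and Gaussian noise quoted as a percentage of full scale, both simplifications of a real pore trace):

```python
# Hypothetical sketch: adjacent-level confusion for 64 equally spaced current
# levels under Gaussian noise (sigma as % of full scale). Real pore levels are
# neither equally spaced nor Gaussian-clean; order-of-magnitude only.
from math import erf, sqrt

def misclass_rate(levels=64, noise_pct=2.0, samples_per_base=1):
    spacing = 100.0 / (levels - 1)               # gap between levels, % of full scale
    sigma = noise_pct / sqrt(samples_per_base)   # averaging n samples shrinks noise
    # Probability a sample crosses the midpoint toward either neighbor:
    q = 0.5 * (1 - erf((spacing / 2) / (sigma * sqrt(2))))
    return 2 * q

for n in (1, 10, 100):
    print(f"{n:3d} samples/base -> ~{misclass_rate(samples_per_base=n):.1%} miscalls")
```

A single sample per base at a few percent noise is hopeless, but averaging many samples per base makes the levels separable, which is one way to see the speed-versus-noise trade-off discussed in the next post.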
#32
Member
Location: Oxford
Join Date: Jul 2008
Posts: 24

BBoy, you weren't at the talk, so I will clarify if I can.

You raise the point about parallelism a lot. What you fail to consider is speed. Not all sensing circuits are the same. On a nanopore system the chemistry is not cyclical and not synchronized; it's free-running. How many bases per second you can measure depends on how quickly you can read the channel and at what noise. 8k channels at 1,000 bases per second per channel is in fact more data than a million sensors at 1 base per second per channel (obviously). Not all chips or circuits are the same, and there are significant constraints on the kinds of circuits you can pack onto silicon without making trade-offs in speed and noise. There's the rub in terms of sensor design. If your noise is too high at the given sample rate, you won't be sequencing.

One answer proposed elsewhere for this is to "floss" DNA and read it several times. Good, but of course if you read it 4 times, that's then like running 1/4 the number of sensors once in terms of throughput.

So with a nanopore system, parallelism is only half the story -- speed (and signal to noise at that speed) is the other. Both are important, not just the parallelism. A sensor needs to be judged on both.

The other important and often overlooked feature is sensor lifetime. Small-volume wells or bubbles won't last very long, minutes or hours. You won't get much data from that. Larger-volume systems run for longer and give more data per chip, lowering cost per base.

Any of the rules that apply to cyclical chemistries, like density of features, are not quite the same on a real-time system. Nanopore systems are very different.

Last edited by clivey; 02-25-2012 at 01:16 AM. Reason: Clearer wording
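The throughput comparison in that post, spelled out as plain arithmetic (my sketch; the numbers are the ones quoted above):

```python
# Throughput = channels x per-channel speed, as quoted in the post above.
def throughput(channels, bases_per_sec, reads_per_molecule=1):
    # Re-reading ("flossing") a molecule n times divides effective throughput by n.
    return channels * bases_per_sec / reads_per_molecule

print(throughput(8_000, 1_000))        # 8,000,000 bases/sec
print(throughput(1_000_000, 1))        # 1,000,000 bases/sec
print(throughput(8_000, 1_000, 4))     # 2,000,000 bases/sec with 4x flossing
```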
#33
Member
Location: Oxford
Join Date: Jul 2008
Posts: 24

Quote:

What I actually said was that we use a 4:1 input multiplex for each circuit, so 4 array wells per circuit. When you Poisson-load 4 wells, 1 gets no pores, 1 gets 1, 1 gets 2 and 1 gets 3 - on average, of course. The array can then switch the circuit to read the well with 1 pore in it, ignoring the others. So we are, in fact, beating Poisson w.r.t. the mapping of pores to circuits and ensuring every circuit is being used. So when we say 8k pores, it means just that: single pores being read.

I can see there are a lot of questions and misconceptions which we can easily answer. If you have further questions, please email me directly from your institution's email account.

Last edited by clivey; 02-25-2012 at 01:15 AM. Reason: Clearer wording
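A quick check of the 4:1 scheme (my own simulation, assuming independent Poisson loading at a mean of one pore per well; the actual loading conditions were not stated in the thread):

```python
# Hypothetical simulation of the 4:1 well-to-circuit multiplex under Poisson
# loading: a circuit is usable if at least one of its 4 wells got exactly one
# pore, since the circuit can be switched to read that well.
import random
from math import exp

def sample_poisson(lam):
    # Knuth's method; fine for small lambda.
    limit, k, p = exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def usable_fraction(n_circuits=100_000, wells=4, lam=1.0):
    hits = sum(
        any(sample_poisson(lam) == 1 for _ in range(wells))
        for _ in range(n_circuits)
    )
    return hits / n_circuits

print(f"~{usable_fraction():.1%} of circuits usable "
      f"(vs ~36.8% with one well per circuit)")
```

At λ = 1 this lands around 84% of circuits with a readable single-pore well, versus the ~37% ceiling for one well per circuit, consistent with "beating Poisson" at the circuit level.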
#34
David Eccles (gringer)
Location: Wellington, New Zealand
Join Date: May 2011
Posts: 811

Aww, I wanted to hear that the lambda sequencing was actually done by 5 passes on an Ion Proton.

Last edited by gringer; 02-25-2012 at 07:58 AM. Reason: Removed irrelevant quote that was subsequently removed
#35
Senior Member
Location: Boston area
Join Date: Nov 2007
Posts: 747

Quote:

1) "The clinic" is not monolithic. You seem to use this as a poor shorthand for "calling missense/nonsense and short indel mutations". There are a number of other applications with clinical value, such as detecting CNVs, chromosomal rearrangements, and transcription states, for which a 4% base-calling error would be quite tolerable.

2) The 4% error rate is dominated by indels in specific contexts; for assays looking for missense/nonsense mutations outside these contexts the system might be acceptable: simply toss any reads which show an indel in the neighborhood of what you are interested in. This is a strategy not unheard of in the 454/Ion Torrent world, due to their indel issues.
#36
Member
Location: Pacific Northwest
Join Date: Oct 2010
Posts: 52

Quote:

However, there are certain applications where parallelism matters and others where long reads do. The trivial example is a 100-page book that is read "competitively" by 100 people reading 1 pg/min and 1 person reading 50 pg/min. If you want the contents of the whole book you definitely want the latter; it will be much easier to piece the storyline together. However, if you want the contents of a single page, the former is preferable. This is something that pretty much everyone on Seqanswers appreciates.

Things get a bit more interesting when you increase the size of the book, throw in errors, and introduce a random selection of segments. If you are interested in a certain short stretch of pages, the statistical advantage of massively parallel short & slow reads becomes considerable. It is my understanding that this is what clinical applications are all about, and this was the context in which I made my latest remark. Several people have already stated that if ONT's technology is anywhere close to what was presented, it is likely to thrive by creating its own niches of new applications. However, displacing short reads does not seem to be one of them.

Quote:

However, for certain applications the metrics are different, and this is where I find ONT's "run until" marketing a bit over the top. If you are after only certain information, then the error of the read can matter, and long reads on a randomly cleaved strand can be a disadvantage when the accuracy is <100%.

Quote:

In any case, thanks for taking the time to write. Your presence in these debates is much appreciated, and very different from the approach other companies are using.
#37
Junior Member
Location: usa
Join Date: Jan 2011
Posts: 3

Hi Clive,

If your membranes break (or pores clog), can you destroy and re-form them? I don't know your email, so I can't write to you directly.
#38
Member
Location: Pacific Northwest
Join Date: Oct 2010
Posts: 52

As Randy Pausch said:

Quote:
#39
Senior Member
Location: Oklahoma
Join Date: Sep 2009
Posts: 401
#40
Member
Location: Brisbane
Join Date: Aug 2010
Posts: 19

I've been thinking more about the disruption that the nanopore tech could cause. There's been a bit of discussion regarding the "niche" of de novo genome assembly, and granted, this will be the first thing people want to get their hands on a system to do (because of the long reads). But for me, "run until" and no library prep are what will probably make the bigger difference - you don't have to just do long reads; there's no reason not to put shorter fragments in.

In our lab we do a lot of population genetics involving non-model plants and animals. This mostly involves genotyping a lot of individuals (usually at a cost of $10-30 per individual). While the cost of sequencing has been coming down, the time/cost/hassle of library prep has been the main barrier for us in going NGS (having to maintain 100's of barcoded primers etc.). Straight away I can see several ways that I can make restriction libraries for each individual, not bother with PCR and any associated errors, then run a Gb or two until sufficient coverage is reached and do genotyping-by-sequencing for close to the cost of what we currently do.

The ability to do RNA-seq gene expression studies in the lab without having to send off to a core facility (even if you have to make cDNA) will also be pretty awesome. I can also see applications for the USB stick in agriculture to allow field-based monitoring of resistance to insecticides/herbicides. I think that putting this kind of sequencing power straight into the hands of researchers is going to be a big game changer, and the more I think about it, the less I can see myself doing a lot of the things that I currently do in the lab!
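For a sense of scale on "run a Gb or two until sufficient coverage is reached" (my own back-of-the-envelope sketch; the pore count, speed, and active fraction are illustrative figures taken from earlier posts in this thread, not a spec):

```python
# Hypothetical "run until" estimate: hours of flowcell time to reach a target
# coverage, given an aggregate sequencing rate.
def hours_to_coverage(genome_bp, target_cov, pores=8_000, bp_per_sec=1_000,
                      active_fraction=0.8):
    rate = pores * bp_per_sec * active_fraction   # aggregate bases/sec
    return genome_bp * target_cov / rate / 3600

# e.g. 20x over a 500 Mbp non-model plant genome (illustrative numbers):
print(f"{hours_to_coverage(500e6, 20):.2f} hours")
```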
Tags: agbt tech