SEQanswers

Old 04-22-2008, 10:02 PM   #1
ECO
--Site Admin--
 
Location: SF Bay Area, CA, USA

Join Date: Oct 2007
Posts: 1,355
Default Live Blogging from the CHI Next Gen Conference! (San Diego, CA): Updated Day 2

The CHI Next Gen Sequencing Conference got started late today, and I will be there for the next two days trying to report as close to live as possible! Please feel free to post here with your thoughts or questions about what's going on! Should be a good time...Helicos is up first tomorrow morning!

I know of at least a few other SEQanswers members that will be there...hopefully I'll get to meet a few of you. Post or PM me if you're planning on being there!

Details are below!

*********************************************
Cambridge Healthtech Institute's Second Annual...

Next Generation Sequencing Technologies
Platforms, Applications, and Case Studies
April 23 - 24, 2008 * Hilton San Diego Resort * San Diego, CA


http://www.healthtech.com/seq/overview.aspx?c=542

*********************************************
Old 04-23-2008, 12:25 PM   #2
ECO
--Site Admin--
 
Location: SF Bay Area, CA, USA

Join Date: Oct 2007
Posts: 1,355
Default

So I'm here with not much groundbreaking to report. The morning had Helicos and the usual suspects (Illumina and 454) with the notable exception of ABI..."who only found out about this conference a week ago!"

Some excellent panel discussions were had; I'll put up the highlights later.

I did however get to meet a few faces from the forum, including Mike (mcariaso) from WikiLIMS/Bioteam, Julia (JKK) from In-Sequence, and Kevin McCarthy from Danaher-Motion. Nice meeting you all!
Old 04-23-2008, 02:53 PM   #3
Chipper
Senior Member
 
Location: Sweden

Join Date: Mar 2008
Posts: 324
Default

Hi,
did Helicos present any new data about sequencing capacity or comment on the current product vs what was used for the Science publication?
Old 04-23-2008, 03:26 PM   #4
ECO
--Site Admin--
 
Location: SF Bay Area, CA, USA

Join Date: Oct 2007
Posts: 1,355
Default

Hey Chipper....

They did. The presenter (their VP/CSO Patrice Milos) stated that the data in the M13 paper was from a system "18 months old", and their new system has much improved chemistry utilizing "virtual terminators" to improve homopolymer performance and extend read lengths. Not really any further details on the chemistry improvements.

They did present very preliminary data on the Helicos mate-pair approaches, one of which is novel. Basically, they run an initial sequencing read "upward" from their oligo-(dT) primer for 25-30 nucleotides, then perform a "dark fill" (controlled incorporation of many unlabeled nucleotides, here 50-100), followed by another sequencing read in the same orientation further "upward". The result is two short reads physically separated by a known number of nucleotides, defined by the size of the dark fill.
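To make the geometry concrete, here is a tiny sketch of where the two reads and the dark fill sit on the template. The lengths are just assumptions picked from the ranges quoted above, nothing official from Helicos.

Code:
# Illustrative layout of the Helicos "dark fill" mate-pair scheme described above.
# Read and fill lengths are assumptions taken from the quoted ranges, not specs.

def dark_fill_layout(read1_len=25, fill_len=75, read2_len=25):
    """Coordinates (start, end) of read 1, the dark fill, and read 2,
    counting upward from the oligo-(dT) priming site at position 0."""
    read1 = (0, read1_len)
    fill = (read1_len, read1_len + fill_len)    # unlabeled nucleotides, no signal
    read2 = (fill[1], fill[1] + read2_len)      # second read, same orientation
    return read1, fill, read2

r1, fill, r2 = dark_fill_layout()
print("read 1:", r1, " dark fill:", fill, " read 2:", r2)
print("gap between reads (set by the dark fill):", r2[0] - r1[1], "nt")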

And they also have a more traditional mate pair approach that reads up, then back down. This requires more complex sample prep with a ligated adapter in addition to their standard poly-A tailing.
Old 04-23-2008, 03:38 PM   #5
ECO
--Site Admin--
 
Location: SF Bay Area, CA, USA

Join Date: Oct 2007
Posts: 1,355
Default

Details on obtaining the presentations (very few are available) are in the attachment.
Attached Files
File Type: txt 2008-04_CHI Next Gen Presentation Info.txt (102 Bytes, 121 views)
Old 04-23-2008, 08:33 PM   #6
ECO
--Site Admin--
 
Location: SF Bay Area, CA, USA

Join Date: Oct 2007
Posts: 1,355
Default Final Day 1 updates

Sequencing via just looking at the bases!

Why didn't we think of that!

Another really neat talk was that of William Glover of ZS Genetics. By labeling nucleotides with single atoms of varying density (the examples were bromination and iodination) and stretching DNA out on a surface, they are able to actually image the sequence using transmission electron microscopy.

Not much in the way of data yet, but the presenter was clearly excited that they would use commodity hardware and achieve read lengths of over 8 kb!!

Pacific Biosciences continues to dazzle...


Stephen Turner's talk was absolutely phenomenal. I did miss Marco Island (fireworks and all), but this presentation was not only delivered with unprecedented eloquence and clarity, it was also filled with genuinely convincing proof-of-principle data. Real-time image data showed their "Zero Mode Waveguides" idling in the presence of only three nucleotides; upon addition of the fourth nucleotide, thousands of wells jump to life as the system detects individual incorporations. There was also real data demonstrating potential read lengths, showing >12 sequential passes around a 135 bp minicircle, resulting in >1500 bases of sequence. Not to mention that's instantly 12x coverage of that molecule.

Turner openly states that the first machines aren't scheduled to ship for another two years...but when they do, watch out, bioinformatics folks. They have a five-year line of sight to ~100 Gb/hour. That's roughly 15 minutes to a human genome. There is no cycling, no pausing, no terminators, no fluor removal; you just push the "go" button and watch the polymerase do its thing.
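Quick back-of-envelope check on those figures (my own arithmetic on the numbers quoted above, not anything from the slides; the coverage value in particular is just an assumption):

Code:
# Sanity check of the PacBio numbers quoted above (my arithmetic, not theirs).

minicircle_bp = 135
passes = 12
print(f"{passes} passes x {minicircle_bp} bp = {passes * minicircle_bp} bases "
      f"(~{passes}x coverage of that single molecule)")   # 1620 bases, consistent with ">1500"

throughput_gb_per_hr = 100    # the ~100 Gb/hour "line of sight" figure
genome_gb = 3.1               # haploid human genome, ~3.1 Gbp
coverage = 8                  # assumed coverage; adjust to taste
minutes = genome_gb * coverage / throughput_gb_per_hr * 60
print(f"{coverage}x human genome at {throughput_gb_per_hr} Gb/hr: ~{minutes:.0f} minutes")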

Real time, single molecule detection with this technology is nothing short of amazing.

The Polonator: Academics Gone Wild...

We also got a detailed description of the Polonator's inner workings and the chemistry that the instrument has been "launched" with. I put quotes around it because it seems clear that not many people will run with this chemistry, but Danaher and Church encourage development of alternate chemistries through the "open source" nature of the Polonator.

My negative impression of the initial chemistry is due solely to the fact that there is approximately a lifetime's worth of molecular biology required up front, including three (yes, THREE) amplification steps. All this for 26 bases of paired-end sequence: not 26x2, but 6+7 from each of the two mate-pair tags. The much-abbreviated workflow is as follows:
  • Shear to ~1kb
  • Size Select
  • Blunt, A-tail
  • Ligate 30-mer
  • T-tail
  • Circularize via dilute ligation
  • Run rolling circle amplification (amp #1)
  • Digest with MmeI (tagging enzyme, type IIS RE), blunt
  • Ligate forward and reverse linkers
  • Limited PCR to enrich for ligation products (amp #2)
  • Set up and run emulsion PCR (amp #3)
  • Capture successfully amplified beads with another, larger set of capture beads
  • Deposit beads on the flow cell and start sequencing using ligation-based chemistry à la Agencourt (now ABI SOLiD).
  • Run 80 hours to obtain 6+7 bases of sequence from each of the two mate-pair tags.
Whew. I was exhausted writing it down, and now typing it; I can't imagine what it's like to actually run. It's like 454 + SOLiD on steroids.
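For the morbidly curious, a little bookkeeping on that workflow (the step list is transcribed from my notes above; the tallying is just mine):

Code:
# Bookkeeping on the Polonator launch chemistry as listed above (my tally).

steps = [
    "Shear to ~1 kb", "Size select", "Blunt, A-tail", "Ligate 30-mer", "T-tail",
    "Circularize via dilute ligation", "Rolling circle amplification (amp #1)",
    "MmeI digest, blunt", "Ligate forward/reverse linkers",
    "Limited PCR to enrich ligation products (amp #2)", "Emulsion PCR (amp #3)",
    "Capture amplified beads", "Deposit on flow cell, sequence by ligation",
]
amp_steps = [s for s in steps if "amp #" in s]
bases_per_tag = 6 + 7                   # 6+7 bases read from each mate-pair tag
total_bases = 2 * bases_per_tag         # two tags per circle
print(f"{len(steps)} prep/sequencing steps, {len(amp_steps)} of them amplifications,")
print(f"for {total_bases} bases per mate pair ({bases_per_tag} + {bases_per_tag}) over an 80-hour run")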

Overall it was a great day, I really enjoyed myself.
Old 04-24-2008, 10:00 AM   #7
bioinfosm
Senior Member
 
Location: USA

Join Date: Jan 2008
Posts: 482
Default

thanks eco for sharing all this info
Old 04-24-2008, 01:06 PM   #8
ECO
--Site Admin--
 
Location: SF Bay Area, CA, USA

Join Date: Oct 2007
Posts: 1,355
Default

No prob bioinfosm....hopefully it's helpful!

The morning of Day 2 was the Genome Center Battle. We heard details about pipelines and processes from Broad, Baylor, JGI, JCVI, and WashU's GSC. Many of the challenges they have worked out are genuinely important for smaller labs to think about as they get into the "Now"-generation sequencing space: reproducibility, contamination control, data QC, analysis, storage, and archival storage should all be part of any lab's setup plan.

Broad's presentation was the most impressive, seeming more like something out of a six-sigma-optimized semiconductor fab than a mostly human-driven wet molecular biology lab, something not often seen in the molbio research space. The process supervisor put together a really nice story about their pipeline and how they handle and track seemingly "routine" processes on a scale that is like no other in the world (20+ Solexa GAII-PE machines!).

Some of the QC assays the centers are implementing were interesting, including a qPCR assay for quantitating libraries to ensure appropriate fragment:bead ratios (this has been published). There was also an "in situ" quantitation of cluster density directly on a Solexa flow cell, via SYBR Green staining and image analysis; this lets them save tremendous amounts of sequencing reagents by trashing sub-optimal flow cells.
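The reason the fragment:bead ratio matters so much for emPCR is Poisson loading of template molecules onto beads; here's a minimal illustration of why you titrate (my own sketch, not anything presented at the meeting):

Code:
# Why library quantitation matters for emPCR: templates land on beads roughly
# as a Poisson process. Minimal illustration; numbers are mine, not from the talks.
from math import exp

def bead_fractions(mean_templates_per_bead):
    """Fractions of beads with zero, exactly one, or more than one template."""
    lam = mean_templates_per_bead
    p0 = exp(-lam)
    p1 = lam * exp(-lam)
    return p0, p1, 1.0 - p0 - p1

for lam in (0.1, 0.5, 1.0, 2.0):
    empty, single, mixed = bead_fractions(lam)
    print(f"templates/bead = {lam:>3}: empty {empty:.2f}, clonal {single:.2f}, mixed {mixed:.2f}")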

Many if not all of the centers presented comparisons of the costs of running each of these instruments. The most clearly articulated came from Vincent Magrini of WashU; however, none of them made clear whether labor costs were included:

454: $6,900 per run for 400 Mb
Illumina: $9,300 per run for 2 Gb
SOLiD: $15,000 per run for 4 Gb
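Normalizing those to cost per Gb (just my arithmetic on the quoted figures, with the same labor-cost caveat):

Code:
# Cost per Gb from the per-run figures quoted above (reagents as presented;
# unclear whether labor is included).
runs = {"454": (6900, 0.4), "Illumina": (9300, 2.0), "SOLiD": (15000, 4.0)}  # ($, Gb)
for platform, (usd, gb) in runs.items():
    print(f"{platform:>8}: ${usd:>6,} / {gb:>3} Gb  =  ${usd / gb:>8,.0f} per Gb")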

Many of the centers are also using combined short/long/Sanger read workflows for genome finishing.
Old 04-25-2008, 05:18 AM   #9
DNAcowboy
Member
 
Location: france

Join Date: Apr 2008
Posts: 31
Default

Quote:
Originally Posted by ECO View Post
Details on obtaining the presentations (very few are available) are available in the attachment.
That's really, really an awesome link. Thanx so much for sharing this info.
Old 04-25-2008, 05:36 AM   #10
DNAcowboy
Member
 
Location: france

Join Date: Apr 2008
Posts: 31
Default

Quote:
Originally Posted by ECO View Post
No prob bioinfosm....hopefully it's helpful!
454: $6900 per run for 400Mb
Illumina: $9300 per run for 2Gb
SOLiD: $15,000 per run for 4Gb
Are these cost numbers recent? Mine are fresh as of today from AB Europe. Price list for reagents/costs, excl. VAT:

SOLiD 4 Gb
Slide, 1 section: €2600 -> €2600/section
Slide, 4 sections: €3700 -> ~€900/section
Slide, 8 sections: €5300 -> ~€670/section

SOLiD mate-paired 6 Gb
Slide, 1 section: €3650 -> €3650/section
Slide, 4 sections: €4700 -> ~€1177/section
Slide, 8 sections: €6300 -> ~€791/section

SOLiD instrument: ~€460k
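For comparison with the WashU per-run numbers earlier in the thread, the per-Gb reagent cost from this list works out roughly as follows (my arithmetic; assumes a full 8-section slide per run, excl. VAT):

Code:
# Per-Gb reagent cost from the AB Europe price list above (my arithmetic, excl. VAT,
# assuming a full 8-section slide per run).
runs = {"SOLiD 4 Gb": (5300, 4), "SOLiD mate-paired 6 Gb": (6300, 6)}  # (EUR, Gb)
for name, (eur, gb) in runs.items():
    print(f"{name}: EUR {eur:,} / {gb} Gb = ~EUR {eur / gb:,.0f} per Gb")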

Old 04-25-2008, 07:05 AM   #11
chris
Member
 
Location: Dundee, Scotland

Join Date: Apr 2008
Posts: 52
Default

This is great stuff, ECO.

Have there been any mentions of the sequence compression found in 454? Is there any improvement in Solexa read length?

Also, how much is being discussed about the Bioinformatics back-up following the sequencing: e.g. which tools are people using?
Cheers,

Chris
Old 04-25-2008, 08:03 AM   #12
ECO
--Site Admin--
 
Location: SF Bay Area, CA, USA

Join Date: Oct 2007
Posts: 1,355
Default

Quote:
Originally Posted by DNAcowboy View Post
That's really, really an awesome link. Thanx so much for sharing this info.
Anytime! Hopefully CHI doesn't bring the smackdown on me.

Quote:
Originally Posted by DNAcowboy View Post
Are these cost nbrs recent? Mine are extra fresh from the day from AB Europe. Price list for reagents/cost excl. VAT

SOLID 4Gb
Slide 1 "section": €2600->2600/section
Slide 4: €3700->~900/section
Slide 8: €5300->~670/section

SOLID mate-paired 6Gb
Slide 1: €3650->3650/section
Slide 4: €4700->~1177/section
Slide 8: €6300->~791/section

SOLID instr. ~k€460
This was from the WashU talk (Magrini)...the numbers could definitely be older. ABI wasn't there presenting, so there was no rebuttal or correction.

Quote:
Originally Posted by chris View Post
This is great stuff, ECO.

Have there been any mentions of the sequence compression found in 454? Is there any improvement in Solexa read length?

Also, how much is being discussed about the Bioinformatics back-up following the sequencing: e.g. which tools are people using?
Cheers,

Chris
There were really no detailed homopolymer/454 discussions; it seems that's mostly been worked out and/or avoided with lots of coverage. They did briefly mention that their new chemistry will improve that performance in addition to giving 400 bp reads (700 Mb total, 10-hour run).

Solexa read length was almost universally reported as 36 bp. They have just launched the "GA II", an upgrade that promises >50 bp reads, but I don't think anyone was running it yet. There is also an external module required to do paired ends on the Solexa...it's basically an additional reagent pump station.

In terms of detailed discussions of backups, there wasn't much. Brian O'Connor discussed what they do at UCLA, which is to use a combination of Xsan storage for the reads themselves (~150 GB/run, ~$1700/TB) and PogoLinux boxes for the images and other data (~450 GB/run, ~$600/TB). Everyone is still saving the images as far as I can tell, but they would of course love to avoid it. I think it was PacBio that discussed a bitwise compressed format for their "traces" which reduces all the information to 48 bytes per base.
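Putting rough numbers on that (my arithmetic from the per-run and per-TB figures above; the 2-gigabase run in the last line is just an assumption to show the scale of the 48 bytes/base format):

Code:
# Rough per-run storage cost from the UCLA figures quoted above (my arithmetic).
xsan_gb, xsan_usd_per_tb = 150, 1700   # reads on Xsan
pogo_gb, pogo_usd_per_tb = 450, 600    # images and other data on PogoLinux boxes
reads_cost = xsan_gb / 1000 * xsan_usd_per_tb
images_cost = pogo_gb / 1000 * pogo_usd_per_tb
print(f"reads: ~${reads_cost:.0f}/run, images: ~${images_cost:.0f}/run, "
      f"total: ~${reads_cost + images_cost:.0f}/run")

# Scale of the quoted 48 bytes/base compressed "trace" format (run size assumed):
bases = 2e9
print(f"48 B/base x {bases:.0e} bases = {48 * bases / 1e9:.0f} GB")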

I forgot to mention the serious smackdown given to Affy by Stephen Kingsmore; his talk is included in the list. Basically, he presented complete comparisons between Illumina DGE and Affy expression analyses; I'll leave it to you to determine what the results showed! The best part of the conference, however, came during the question/answer period for Kingsmore's talk (which had just finished slamming Affy), when a very quiet, mild-mannered guy stood up and said that he was the one who designed all the Affy human arrays! It was one of those tense quiet/gasp moments for the crowd. He immediately pointed out that, based on the genes and coverage reported for Affy, it was obvious the researcher had used the outdated previous generation of Affy chips (U133, >5 years old), which have been upgraded significantly since. Either way...I think Affy is in deep trouble.

Woohoo, happy friday people!
Old 04-27-2008, 06:29 AM   #13
ScottC
Senior Member
 
Location: Monash University, Melbourne, Australia.

Join Date: Jan 2008
Posts: 246
Default

Quote:
Originally Posted by ECO View Post
Sequencing via just looking at the bases!

Why didn't we think of that!

Another really neat talk was that of William Glover of ZS Genetics. Via labeling nucleotides with single atoms of varying density (examples were bromination and iodination), and stretching DNA out on a surface, they are able to actually image the sequence using transmission electron microscopes.

Not that much in the way of data, but the presenter was clearly excited that they would use commodity hardware, and achieve read lengths of over 8kb!!
Yeah, that was an interesting talk... although I did like the point one of the audience members made during question time about how they intend to make their money. I don't think that question was answered very well... he basically just said they'd sell a high-spec scope, that there would be next to no consumables, and that the "chips" (or whatever they're calling the solid supports) will also cost an insignificant amount because the semiconductor industry already has the fabrication process under control.

Interesting, nonetheless... if it works. As you pointed out, there was really no data there, other than a sketchy EM of some unreadable partially labelled DNA.

Scott.

Old 04-27-2008, 06:38 AM   #14
ScottC
Senior Member
 
Location: Monash University, Melbourne, Australia.

Join Date: Jan 2008
Posts: 246
Default

Quote:
Originally Posted by ECO View Post

Solexa read length was almost universally reported as 36bp. But they have just launched the "GA II" which is an upgrade that purports >50bp reads. But I don't think anyone was running it yet. There is also an external module required to do paired ends on the Solexa...it's basically an additional reagent pump station.
Yeah, I think people are using the 50 bp reads, but not many. I don't think they're shipping the 50 bp kits yet, are they? The Broad are doing it, of course.

Quote:
Originally Posted by ECO View Post
I forgot to mention the serious smackdown given to Affy by Stephen Kingsmore, his talk is included in the list.
As you mentioned, the Affy designer who stood up at the end delivered some pretty good "smackdown" in return. I thought it was pretty bad form of them to basically stand up and bad-mouth the product without saying that they were comparing against outdated chips! If they didn't know they were using outdated products... well, surely that's even worse. I thought he defended himself pretty well.


Overall, as a researcher in bacterial pathogenesis, I was a bit disappointed that there wasn't more prokaryotic work or de novo assembly information presented.

Scott (recovering from 15 hour flights... ugh).
Old 04-28-2008, 02:52 AM   #15
chris
Member
 
Location: Dundee, Scotland

Join Date: Apr 2008
Posts: 52
Default

Quote:
Originally Posted by ECO View Post
In terms of detailed discussions of backups, there wasn't much. Brian O'conner discussed what they do at UCLA, which is use a combination of Xsan storage for the reads themselves (~150Gb/run ~$1700/TB), and PogoLinux boxes for the images and other stuff (~450Gb/run ~$600/TB). Everyone is still saving the images as far as I can tell, but they would of course love to avoid it. I think it was PacBio that discussed a bitwise compressed format for their "traces" which reduces all the information to 48 bytes per base.
They're storing the images?! Why? That's a serious amount of space to assign just for archiving. AFAIK this wasn't done routinely for ABI sequencing, so why do it for HTS? Is it a justifiable expense in case someone wishes to re-analyse them?

Quote:
Overall, as a researcher in bacterial pathogenesis, I was a bit disappointed that there wasn't more prokaryotic work presented, or more de novo assembly information presented.
At a recent workshop I attended this was a common query, and according to some accounts current de novo software can't cope with the depth of coverage generated by Solexa et al...
Old 04-28-2008, 03:45 AM   #16
ScottC
Senior Member
 
Location: Monash University, Melbourne, Australia.

Join Date: Jan 2008
Posts: 246
Default

Quote:
Originally Posted by chris View Post
They're storing the images?! Why? That's a serious amount space to assign just for archive. AFAIK this wasn't done routinely for ABI sequencing, so why do it for HTS? Is it a justifiable expense in case someone would wish to re-analyse them?
I was considering what our data retention policies will be when we get our system fully up and running at capacity. Naturally, I tended to think that deleting the images was best, just because they occupy so much space. Once you have the base calls, they're not much use... but then I heard a few talks about improvements to the Illumina software that does the channel deconvolution, and how these improvements might lead to better base calling. Well, if the images are gone, you'll never have the chance to get that improved data. But then, will anyone even want them reanalyzed? I guess it's a trade-off between storage cost and estimated future value. Maybe it's just easier to do the run again rather than reanalyse old images.
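Using the rough figures floating around this thread, a quick break-even sketch (my numbers, obviously very site-dependent):

Code:
# Keep images for possible re-calling vs. just re-run the flow cell.
# All figures are assumptions borrowed from elsewhere in this thread.
image_gb_per_run = 450    # image data per run (UCLA figure quoted earlier)
usd_per_tb = 600          # cheap bulk storage
rerun_cost = 9300         # e.g. the quoted Illumina per-run reagent cost
storage_cost = image_gb_per_run / 1000 * usd_per_tb
print(f"storing one run's images: ~${storage_cost:.0f}; re-running: ~${rerun_cost:,}")
print(f"that's ~{rerun_cost / storage_cost:.0f} runs' worth of images stored "
      "for the price of one re-run (ignoring backup/admin overhead)")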

Quote:
Originally Posted by chris View Post
At a recent workshop I attended this was a common query and according to some accounts current de novo software can't cope with the depth of coverage generated by Solexa et al...
I guess that's true in some respects... but it depends on the software you use. I know a lot of 'old' software can't handle it, but there are new ones now that can (Velvet and all the rest). We're doing a bit of work on that ourselves. I think the problem is more the short reads than the depth of coverage. Well, getting back to my original gripe... I think the reason that there's so much human- and mammal-centric work going on (rather than my favourite - bacterial stuff) is that there's more money in it :-)

Old 04-28-2008, 04:01 AM   #17
chris
Member
 
Location: Dundee, Scotland

Join Date: Apr 2008
Posts: 52
Default

Hi Scott,

Quote:
Originally Posted by ScottC View Post
Once you have the base calls, they're not much use... but then I heard a few talks regarding improvements to the Illumina software that does the channel deconvolution and how these improvements might lead to better base calling etc. Well if the images are gone, you'll never have the chance to get that improved data. But then, will anyone even want them reanalyzed. I guess it's a trade-off between storage cost and estimated future value. Maybe it's just easier to do the run again rather than reanalyse old images.
That's my point. If it's going to cost n thousand (in whatever currency) to store the images on the off-chance that someone may want to re-analyse the base calls, it may just be cheaper to re-run the experiment - assuming you still have the samples, of course.

What kind of improvements in the base calls are we talking about, and how much of a difference would they make to a final assembly?

Quote:
Well, getting back to my original gripe... I think the reason that there's so much human- and mammal-centric work going on (rather than my favourite - bacterial stuff) is that there's more money in it :-)
Well, that's always going to be the case, isn't it? However, smaller genomes will benefit most from this type of data, as there's a much greater chance of unique reads. Lower costs also mean a better chance of getting funding for the sequencing.
Old 04-29-2008, 09:07 AM   #18
JohnBull
Junior Member
 
Location: cambridge

Join Date: Apr 2008
Posts: 1
Default eh?

Quote:
Originally Posted by ECO View Post
Something not often seen in molbio research space. The process supervisor really put together a nice story about their pipeline and how they handle and track seemingly "routine" processes on a scale that is like no other in the world (20+ Solexa GAII-PE machines!).
GAs:

Sanger = 28
BGI = 19

!
Old 04-29-2008, 09:28 AM   #19
ECO
--Site Admin--
 
Location: SF Bay Area, CA, USA

Join Date: Oct 2007
Posts: 1,355
Default

Ok ok..._almost_ like no other in the world.

Apologies to any sensitive Sangerites out there.

We'd love to hear about the LIMS and data management pipeline in use there too!
Old 04-30-2008, 05:46 PM   #20
ScottC
Senior Member
 
Location: Monash University, Melbourne, Australia.

Join Date: Jan 2008
Posts: 246
Default

Quote:
Originally Posted by chris View Post
That's my point. If it's going to n thousand $currency to store the images on the off-chance that someone may want to re-analyse the base calls it may just be cheaper to re-run the experiment - assuming you still have samples of course

What kind of improvements in the base-calls are we talking about and how much of a difference will it make to a final assembly?

I'm not sure at this point, but I do know there are a few packages on the horizon that will produce new base-calling results. I guess we'll have to wait and see how good they are, and whether it's worth keeping all that data.

Cheers,
Scott.