SEQanswers > Core Facilities


Old 04-04-2013, 08:55 AM   #1
Location: England

Join Date: Jan 2013
Posts: 16
DNA sample tracking

Hi all,

A not particularly intensive question here. We recently had a couple of sample identities mixed up by our exome sequencing provider (i.e. we got back all our data, but some of the .fastq files were misidentified). This is obviously a huge issue.

I was wondering what sort of approaches are taken in other core labs to prevent, or at least catch, this sort of error? If we're supposedly moving towards clinical sequencing as standard, this really must not happen.

Thanks for any suggestions.
rjohnp is offline   Reply With Quote
Old 04-04-2013, 02:06 PM   #2
Senior Member
Location: UK

Join Date: Jan 2010
Posts: 390

I like the fact you assume it was your sequencing provider that mixed up the sequences, and not your lab that mixed up the samples.

I'm only joking; I'm sure you've gone back and genotyped what you sent, and had your provider genotype the aliquots you sent them. But I've had to deal with many situations where we were sent samples (or data) that are not what the originator claims. At any point in the process where a human is involved, there exists the possibility of a sample mix-up.

I think the only answer on the lab side is automation. You could run a genotyping panel on samples as they come in, and check it against, e.g., the exome results on the way out. It would also be great to have some kind of sample barcoding that is applied before capture and library prep and can be assayed later (I'm a bioinformatician, not running the machines, so I have no idea how feasible this is).
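The in/out check described above amounts to a genotype concordance test. A minimal sketch of the idea, assuming genotypes are encoded as 0/1/2 alternate-allele counts keyed by site ID (the names, encoding, and threshold below are illustrative assumptions, not a specific panel or pipeline):

```python
# Illustrative genotype concordance check between an incoming
# sample's SNP-panel genotypes and genotypes called from its
# exome data. Genotypes are 0, 1, or 2 copies of the alt allele.

def concordance(panel_gts, exome_gts):
    """Fraction of shared sites where the two genotype calls agree.

    Both arguments map a site ID (e.g. an rsID) to a 0/1/2 genotype.
    """
    shared = set(panel_gts) & set(exome_gts)
    if not shared:
        raise ValueError("no overlapping sites to compare")
    agree = sum(panel_gts[s] == exome_gts[s] for s in shared)
    return agree / len(shared)

# Concordance well below ~0.95 over a few dozen informative sites
# is a strong hint the sample and the data do not match.
panel = {"rs1": 0, "rs2": 1, "rs3": 2, "rs4": 1}
exome = {"rs1": 0, "rs2": 1, "rs3": 2, "rs4": 0}
print(concordance(panel, exome))  # 3 of 4 sites agree -> 0.75
```

In practice this is what tools like bcftools gtcheck or Picard CrosscheckFingerprints do at scale; the sketch only shows the comparison logic.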

Far too often, sample mix-ups only present themselves at analysis. My favourite, as it is the easiest to pick up and rectify, is the trio where a parent has been swapped with a child.
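The trio swap is easy to pick up because it shows as a spike in Mendelian inconsistencies. A sketch of that check, again assuming 0/1/2 genotype encoding (the lookup table and names are illustrative, not any particular pipeline's implementation):

```python
# Count Mendelian inconsistencies in a trio to flag a swapped
# sample. A child's genotype is consistent if it can be formed
# from one allele of each parent.

CONSISTENT = {
    # (mother, father) -> set of possible child genotypes
    (0, 0): {0}, (0, 1): {0, 1}, (0, 2): {1},
    (1, 0): {0, 1}, (1, 1): {0, 1, 2}, (1, 2): {1, 2},
    (2, 0): {1}, (2, 1): {1, 2}, (2, 2): {2},
}

def mendelian_error_rate(mother, father, child):
    """Fraction of shared sites violating Mendelian inheritance."""
    sites = set(mother) & set(father) & set(child)
    errors = sum(
        child[s] not in CONSISTENT[(mother[s], father[s])]
        for s in sites
    )
    return errors / len(sites)
```

A genuine trio should show an error rate near the genotyping error rate (well under 1%); a swapped parent and child pushes it far higher, and trying the three possible "child" assignments quickly identifies the correct labelling.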
Bukowski is offline   Reply With Quote
Old 04-05-2013, 04:44 AM   #3
Location: England

Join Date: Jan 2013
Posts: 16

A valid point of course, but yes, we did check all that.

Very interesting, actually: there are some good articles out there on lab automation, mainly for clinical pathology labs, some showing that error rates actually go up following automation, at least for the first couple of months. The hardware is only as good as the programmer and the user; garbage in, garbage out and all that.

I guess an issue with ligating an oligo tag onto all your fragmented DNA (I think that's what you're suggesting here) is that you're essentially reducing your coverage, as you will just trim that sequence off during analysis: say a 6% loss in coverage for a hexamer tag on 100 bp reads. And if you mix up the samples doing that, you have no hope...
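The coverage cost above is just the tag length as a fraction of the read length; a quick sketch to check it for a few read lengths (the function name is illustrative):

```python
# Back-of-the-envelope cost of an inline tag that is trimmed
# off each read before alignment.

def coverage_loss(tag_len, read_len):
    """Fraction of sequenced bases spent on the tag."""
    return tag_len / read_len

print(coverage_loss(6, 100))  # hexamer on 100 bp reads -> 0.06
print(coverage_loss(6, 150))  # same tag on 150 bp reads -> 0.04
```

So the penalty shrinks with longer reads, though the mix-up risk during tag ligation itself remains.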
rjohnp is offline   Reply With Quote
