SEQanswers > Sequencing Technologies/Companies > Illumina/Solexa
06-20-2013, 06:15 AM   #1
axr624 (Junior Member, Location: bham uk, Join Date: May 2013, Posts: 2)

Library quantification quandary...

Hello! First-time poster, although I have lurked for a while.

I have looked through the site for advice on library quantification, and understand that ideally a combination of Qubit, Bioanalyser and qPCR should be used.

However, I am trying to use the MiSeq to generate reads out of transposons into the neighbouring genome (TraDIS), and I'm not sure whether the same constraints apply to my library prep as they do to most other applications.

Firstly, in TraDIS the number of reads is more important (to an extent) than the length of the reads. As such, is tightly defining the fragment length of my library as important as it is for other applications? Does having a wider range of fragment lengths on the flow cell affect cluster generation, the percentage passing filter (PF), or read quality? An important consideration here is that I would like to avoid introducing bias through stringent size selection.

Secondly, assuming a wider fragment range in the library, would qPCR be the best option for quantification? Correlating Qubit concentrations with Bioanalyser traces becomes more problematic for a wide-range library; would qPCR fully resolve this issue?

As may be evident, I am still working through the library prep to finalise a standardised, reliable protocol. There are other aspects I need to look into, but I think it makes sense to iron out the basics of the experiment first. Any and all opinions are appreciated, and apologies for the long post!
06-28-2013, 01:48 AM   #2
axr624 (Junior Member, Location: bham uk, Join Date: May 2013, Posts: 2)

As a follow-up, I have used a non-size-selected library in two runs. The first run gave a cluster density of 339 k/mm², 85% of clusters passing filter (PF), and 5.3 M reads. The second, loaded with twice the amount of library, achieved 414 k/mm², 84% PF, and 6.2 M reads. All other conditions were equal between runs...
06-28-2013, 02:48 AM   #3
TonyBrooks (Senior Member, Location: London, Join Date: Jun 2009, Posts: 298)

You would still need to know the size of your library in order to quantify it correctly.
The Qubit only measures the mass of dsDNA in the library; it tells you nothing about the number of DNA molecules (the molarity), which is what is needed for sequencing. For a fixed mass of DNA, short molecules will give a higher molarity than long molecules because their molecular weight is lower.
Similarly, if you qPCR you'll need to normalise the fluorescence values based on fragment length (assuming you're using a SYBR assay): longer molecules give more SYBR fluorescence.
The problem comes in picking a correct bp value for these normalisations. Ideally you'd have a nice tight library with not much size variation and no adapter dimer, but this isn't always possible.
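
For reference, here is a minimal sketch of the mass-to-molarity conversion described above, assuming an average mass of roughly 660 g/mol per base pair of dsDNA; the function name and the example concentration/lengths are illustrative only and not taken from any kit's documented protocol:

[CODE]
# Convert a Qubit mass concentration (ng/ul) to molarity (nM) for dsDNA,
# using an average fragment length (e.g. from a Bioanalyser trace).
# Assumes an average of ~660 g/mol per base pair of double-stranded DNA.

def dsdna_nm(conc_ng_per_ul, mean_fragment_bp):
    """Return the library concentration in nM."""
    grams_per_mole = 660.0 * mean_fragment_bp   # approximate MW of one fragment
    # ng/ul -> g/L is a factor of 1e-3; (g/L) / (g/mol) = mol/L; 1 mol/L = 1e9 nM
    return conc_ng_per_ul * 1e-3 / grams_per_mole * 1e9

# The same 2 ng/ul library evaluated at two assumed average lengths,
# showing how strongly the chosen bp value affects the loading calculation.
for length in (300, 500):
    print("%d bp: %.2f nM" % (length, dsdna_nm(2.0, length)))
[/CODE]

The same 2 ng/ul library works out to about 10 nM if you assume 300 bp but only about 6 nM if you assume 500 bp, which is why the chosen bp value matters so much.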
07-18-2013, 11:32 AM   #4
rthornton4 (Junior Member, Location: Houston, Texas, Join Date: Sep 2010, Posts: 6)

Interesting point, TonyBrooks! We were just debating this earlier in our lab. We are a core facility that offers Illumina NGS, and we will sequence user-prepped libraries once they have been through our QC process, which includes PicoGreen, a BioA trace, and KAPA qPCR. I just completed analysis on 12 user-prepped Agilent HaloPlex libraries. The library fragments span a range from 180 bp to around 600 bp, and it's not a smooth curve on the trace either. With this broad span of fragment sizes, would you simply look at the 'Region Table' tab on the BioA trace and take the average? The average does appear to be calculated correctly, but how would you normalize for the broad range of library sizes to determine the nanomolar concentration? It's an interesting question and I would love to hear opinions on this. Thanks!
07-18-2013, 12:38 PM   #5
SNPsaurus (Registered Vendor, Location: Eugene, OR, Join Date: May 2013, Posts: 521)

With a broad range of sizes, taking the average length will cause problems. Say you have a distribution of insert sizes from 100 to 500 bp; the average is 300 bp. But the smaller and larger fragments do not balance out, because for a given mass the number of molecules scales inversely with length: there are 3x as many fragments at 100 bp as at 300 bp, but only 3/5 as many at 500 bp as at 300 bp. Something like the geometric mean of the size range would be more appropriate (~224 bp for 100-500 bp). Of course, having spikes and non-normal distributions adds complications!

Some fragment length analyzers do report regional molarity, or calculate the molarity of a series of peaks more accurately.
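
To put rough numbers on this, here is a toy sketch (all region-table values are invented, nm_from_mass is a helper defined here rather than an instrument or kit function, and the same ~660 g/mol per bp assumption applies) comparing a single average-length conversion with the sum of per-region molarities:

[CODE]
# Toy comparison (values invented): one average-length conversion for a
# broad library vs. summing the molarity of each region of the size
# distribution. Assumes ~660 g/mol per base pair of dsDNA.

BP_MW = 660.0  # approximate g/mol per base pair

def nm_from_mass(mass_ng_per_ul, length_bp):
    """Molarity (nM) of dsDNA at a given mass concentration and length."""
    return mass_ng_per_ul * 1e-3 / (BP_MW * length_bp) * 1e9

# Hypothetical region table: (mean fragment size in bp, mass in ng/ul)
regions = [(150, 0.5), (300, 1.0), (550, 0.5)]

total_mass = sum(mass for _, mass in regions)
avg_size = sum(size * mass for size, mass in regions) / total_mass  # mass-weighted average size

single_estimate = nm_from_mass(total_mass, avg_size)
regional_sum = sum(nm_from_mass(mass, size) for size, mass in regions)

print("average size %.0f bp -> %.2f nM (single-length estimate)" % (avg_size, single_estimate))
print("sum of per-region molarities -> %.2f nM" % regional_sum)
[/CODE]

With these made-up numbers the per-region sum comes out roughly 20% higher than the single average-length estimate, because the short fragments dominate the molecule count.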
__________________
Providing nextRAD genotyping and PacBio sequencing services. http://snpsaurus.com
Tags: library, miseq, qpcr, quantification

Thread Tools

Posting Rules
You may not post new threads
You may not post replies
You may not post attachments
You may not edit your posts

BB code is On
Smilies are On
[IMG] code is On
HTML code is Off




All times are GMT -8. The time now is 05:56 AM.


Powered by vBulletin® Version 3.8.9
Copyright ©2000 - 2020, vBulletin Solutions, Inc.
Single Sign On provided by vBSSO