Thread | Thread Starter | Forum | Replies | Last Post
--- | --- | --- | --- | ---
Ion Proton beta user feedback? | nhunkapiller | Ion Torrent | 9 | 04-20-2013 12:44 AM
Buy Ion Proton for core facility or not? | flxlex | Core Facilities | 18 | 04-02-2013 08:23 PM
Library prep kits for Ion PGM or Ion Proton | Bioo Scientific | Vendor Forum | 0 | 02-18-2013 07:25 AM
Ion Torrent libraries on proton? | Apex | Ion Torrent | 2 | 12-12-2012 07:33 AM
Ion Torrent $1000 Genome!? Benchtop Ion Proton Sequencer | aeonsim | Ion Torrent | 88 | 10-28-2012 05:50 AM
#1
Senior Member
Location: Wales
Join Date: May 2008
Posts: 114
Hello all, we've just done our first Proton run, and the per-base quality plot from FastQC shows ~90% of our data in the Q20 to Q28 range. Should I be able to get up into the 30s, or is it normal to be in the 20s with the Proton?
JPC
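For anyone who wants to sanity-check numbers like these outside of FastQC, here is a minimal sketch of the same per-position calculation, assuming Phred+33-encoded FASTQ (the read data below is invented for illustration; this is not FastQC's actual code):

```python
# Mean Phred quality per read position from FASTQ lines (Phred+33 assumed).
# A quick sanity check, not a FastQC replacement.

def mean_quality_per_position(fastq_lines):
    """Return a list of mean Phred scores, one per read position."""
    sums, counts = [], []
    for i, line in enumerate(fastq_lines):
        if i % 4 != 3:          # the quality string is every 4th line
            continue
        for pos, ch in enumerate(line.strip()):
            q = ord(ch) - 33    # Phred+33: '!' = Q0, 'I' = Q40
            if pos >= len(sums):
                sums.append(0)
                counts.append(0)
            sums[pos] += q
            counts[pos] += 1
    return [s / c for s, c in zip(sums, counts)]

# Two tiny made-up reads:
reads = [
    "@read1", "ACGT", "+", "IIII",   # 'I' = Q40 at every position
    "@read2", "ACGT", "+", "5555",   # '5' = Q20 at every position
]
print(mean_quality_per_position(reads))  # [30.0, 30.0, 30.0, 30.0]
```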
#2
Junior Member
Location: Beijing
Join Date: Mar 2012
Posts: 8
I think your quality is already very good for the Proton.
#3
Senior Member
Location: Wales
Join Date: May 2008
Posts: 114
In case anyone else is interested: I downloaded some exome data from Ion Community and converted the BAM files so I could run them through FastQC. The results are very similar to what we are seeing, with Q20 to Q30 being the range in which the vast majority of the data lies.
We're doing some recalibration in-house and seeing some Q30+ data, but not much.
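For anyone wanting to repeat the exercise: recent samtools can do the conversion with `samtools fastq input.bam > reads.fastq`, and the result can be fed straight to FastQC. As a rough standalone check of "what fraction of calls sit in the Q20-Q30 band", here is a sketch that works on Phred+33 quality strings (e.g. column 11 of `samtools view` output); the strings below are invented:

```python
# Fraction of base calls in the Q20-Q29 band, given Phred+33 quality strings.
# Toy illustration only; real runs have millions of reads.

def band_fraction(qual_strings, lo=20, hi=30):
    """Fraction of base calls with lo <= Q < hi."""
    in_band = total = 0
    for qs in qual_strings:
        for ch in qs:
            q = ord(ch) - 33     # decode Phred+33
            total += 1
            if lo <= q < hi:
                in_band += 1
    return in_band / total if total else 0.0

quals = ["55?5", ">>II"]         # a mix of Q20, Q29, Q30 and Q40 calls
print(round(band_fraction(quals), 3))  # 0.625
```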
#4
Member
Location: Rockville, MD
Join Date: Apr 2011
Posts: 23
Also, keep in mind that Q30 Ion data != Q30 Illumina data, since the base-calling algorithms are different and the estimation/prediction of "quality" is inherently different between the base callers on each platform.
#5
Sequenizer
Location: Singapore
Join Date: Sep 2010
Posts: 27
Could you elaborate on how Q30 (Ion) != Q30 (Illumina)? I thought it was a fixed value indicating base-calling accuracy of 99.9% (hence one error in 1000 bases).
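For reference, the Phred scale itself is fixed: Q = -10·log10(p), so Q30 does correspond to p = 0.001. The disagreement in this thread is about how each platform *estimates* p, not about the scale. A quick check of the mapping:

```python
import math

def phred_to_perror(q):
    """Error probability implied by a Phred quality score."""
    return 10 ** (-q / 10)

def perror_to_phred(p):
    """Phred quality score implied by an error probability."""
    return -10 * math.log10(p)

print(phred_to_perror(30))            # 0.001 -> one error in 1000 bases
print(phred_to_perror(20))            # 0.01  -> one error in 100 bases
print(round(perror_to_phred(0.001)))  # 30
```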
#6
David Eccles (gringer)
Location: Wellington, New Zealand
Join Date: May 2011
Posts: 838
Quote:
[Please don't consider that an exhaustive list... that's just what came to my mind in a couple of minutes.]

Different systems will have different biases due to the different technology used in the system, as well as different assumptions made by the people writing the code to calculate error probability. All that can really be done is to make some guesses about which parameters are the most important, test a few models based on those parameters in real-world scenarios, choose the best model(s), and hope that your models fit the most common use cases for analysis.

In short, it would be silly for Illumina and Life Technologies to use the same method for calculating sequencing error, because of the substantially different technology behind the machines.

Last edited by gringer; 11-04-2013 at 02:51 AM.
#7
Sequenizer
Location: Singapore
Join Date: Sep 2010
Posts: 27
In that case, how can you compare the Q-scores at all? Clearly, it makes a huge difference whether you look at an Illumina Q30 run, a Proton Q17 run, or a PacBio <Q10 run.
Just by looking at the data in a browser you can see more artifacts the lower the Q-score is. But I agree, it is difficult to compare all these apples and oranges and bananas: with varying read lengths, PE vs. SR, and long reads from PacBio, the final assembly/alignment will show you stats that might look comparable at first sight, but once you drill down deeper it raises many questions...
#8
David Eccles (gringer)
Location: Wellington, New Zealand
Join Date: May 2011
Posts: 838
Quote:
The generation of Q scores is difficult (think P vs NP if you're a maths person), and that leads to data-specific biases that in most cases cannot be predicted prior to carrying out a run on a new sample.
#9
Junior Member
Location: Ann Arbor, MI
Join Date: Feb 2013
Posts: 2
Actually, generating Q scores is usually done by applying a calibrated statistical model that uses quality predictors which are vendor- and platform-specific. It's essentially a machine-learning exercise.
Every calibration is imperfect, certainly, but you don't need to know everything about all the error modes and quirks of a platform to get a reasonably good quality model. The difficulties in getting good calibrations (and hence reliable quality scores) are more practical in nature than theoretical.
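The calibration idea can be sketched in a few lines: bin base calls by their predicted quality, count mismatches against a trusted reference, and convert each bin's observed error rate back to a Phred score. This is a toy illustration of the principle, not any vendor's pipeline; the data and the add-one smoothing choice are mine:

```python
import math

def empirical_q(calls):
    """calls: iterable of (predicted_q, is_error) pairs.
    Returns {predicted_q: empirical_q}, one entry per predicted-Q bin."""
    bins = {}
    for q, is_err in calls:
        total, errs = bins.get(q, (0, 0))
        bins[q] = (total + 1, errs + int(is_err))
    out = {}
    for q, (total, errs) in bins.items():
        p = (errs + 1) / (total + 2)   # add-one smoothing so p is never 0
        out[q] = -10 * math.log10(p)   # observed error rate back to Phred
    return out

# Toy example: bases predicted at Q30, but 1 error observed in 10 calls
calls = [(30, i == 0) for i in range(10)]
table = empirical_q(calls)
print(round(table[30], 1))   # well below 30: the predictions were optimistic
```

A real recalibration (e.g. what base callers ship with) conditions on many more predictors than the predicted Q alone, which is where the practical difficulty mentioned above comes in.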
Tags: per base quality, proton, q20, q30, quality