SEQanswers

SEQanswers (http://seqanswers.com/forums/index.php)
-   Ion Torrent (http://seqanswers.com/forums/forumdisplay.php?f=40)
-   -   Ion Proton Q20 (http://seqanswers.com/forums/showthread.php?t=33555)

JPC 09-10-2013 01:41 AM

Ion Proton Q20
 
Hello all, we've only just done our first Proton run, and the per-base quality plot from FastQC shows ~90% of our data in the Q20 to Q28 range. Should I be able to get up into the 30s, or is it normal to be in the 20s with the Proton?

JPC

eulbra 09-11-2013 05:38 PM

I think your quality is already very good for the Proton.

JPC 09-13-2013 08:17 AM

In case anyone else is interested, I downloaded some exome data from Ion Community and converted the BAM files so I could put them into FastQC. The results are very similar to what we are seeing, with Q20 to Q30 being the range in which the vast majority of the data lies.

We're doing some recalibration in-house and seeing some Q30+ data, but not much.
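
As a rough sketch of that kind of check (a minimal example assuming pysam is installed; "example.bam" is a placeholder path, not the actual Ion Community file), you can tally the reported per-base qualities straight from a BAM without converting to FASTQ first:

Code:

    import pysam
    from collections import Counter

    counts = Counter()
    # Tally every reported base quality in the BAM.
    with pysam.AlignmentFile("example.bam", "rb") as bam:
        for read in bam.fetch(until_eof=True):
            quals = read.query_qualities
            if quals is not None:
                counts.update(quals)

    total = float(sum(counts.values()))
    for threshold in (20, 30):
        n = sum(c for q, c in counts.items() if q >= threshold)
        print("Q%d+: %.1f%% of bases" % (threshold, 100 * n / total))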

jonathanjacobs 09-24-2013 09:54 AM

Also, keep in mind that Q30 Ion data != Q30 Illumina data: the base-calling algorithms are different, and the estimation/prediction of "quality" is inherently different between the two platforms.

Buzz0r 11-04-2013 01:49 AM

Could you elaborate on how Q30 (Ion) != Q30 (Illumina)? I thought it was a fixed value indicating a base-calling accuracy of 99.9% (hence one error in 1,000 bases).
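
(For reference, the arithmetic behind that reading of Q30 is the standard Phred scale, Q = -10 * log10(P). A minimal sketch of the conversion, with an illustrative function name:

Code:

    def phred_to_error(q):
        # Phred scale: Q = -10 * log10(P), so P = 10 ** (-Q / 10).
        return 10 ** (-q / 10.0)

    for q in (10, 17, 20, 30):
        p = phred_to_error(q)
        print("Q%d -> error prob %.4f (1 error in %.0f bases)" % (q, p, 1 / p))

Both vendors report against this nominal scale; the disagreement is over how well the reported values match reality.)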

gringer 11-04-2013 02:48 AM

Quote:

Originally Posted by Buzz0r (Post 120679)
Could you elaborate on how Q30 (Ion) != Q30 (Illumina)? I thought it was a fixed value indicating a base-calling accuracy of 99.9% (hence one error in 1,000 bases).

The causes of error are unpredictable, so it is essentially impossible to work out the precise accuracy of a given sequenced base. A number of things can contribute to error; some can be measured, some cannot:
  • polymerase read errors
  • wobble base pairing
  • incorrect primer binding
  • image focus
  • ion calibration
  • fluorophore overlap / bleed through
  • phasing (stochastic chemistry for base addition)
  • inadequate flow
  • irregular flow
  • electrical noise
  • optical noise
  • bubbles
  • sample contamination
  • sample degradation

[please don't consider that an exhaustive list... that's just what came to my mind in a couple of minutes]

Different systems will have different biases due to the different technology used in the system, as well as different assumptions made by the people writing the code to calculate error probability. All that can really be done is to make some guesses about what parameters are the most important, test a few models based on those parameters in real-world scenarios, choose the best model(s), and hope that your models fit the most common use cases for analysis.

In short, it would be silly for Illumina and Life Technologies to use the same method for calculating sequencing error, because of the substantially different technology behind the machines.

Buzz0r 11-05-2013 01:39 AM

In that case, how can you compare Q scores at all? Clearly, it makes a huge difference whether you look at an Illumina Q30 run, a Proton Q17 run, or a PacBio <Q10 run.

Just by looking at the data in a browser you can see more artifacts the lower the Q score, but I agree, it is difficult to compare all these apples and oranges and bananas. With varying read lengths, PE vs. SR, and long reads from PacBio, the final assembly/alignment will show you stats that might look comparable at first sight, but once you drill down deeper it raises many questions...

gringer 11-05-2013 02:32 AM

Quote:

Originally Posted by Buzz0r (Post 120775)
In that case, how can you compare Q scores at all? Clearly, it makes a huge difference whether you look at an Illumina Q30 run, a Proton Q17 run, or a PacBio <Q10 run.

Evaluating (and comparing) the correctness of Q scores for a well-known dataset is fairly easy -- PhiX is a common one, but everyone has their own pet. All you need to do is compare the quality values with what is expected ("This is a 17-generation inbred strain with SNPs here, here and here, and a gene translocation from here to here. If the sequence is showing a SNP here, then that's probably an incorrect base").
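
A minimal sketch of that kind of check (assuming pysam and a BAM aligned to the known reference with MD tags present, e.g. via samtools calmd; "phix.bam" is a placeholder name): bin bases by their reported quality and compare against the observed mismatch rate.

Code:

    import math
    from collections import defaultdict
    import pysam

    totals = defaultdict(int)
    mismatches = defaultdict(int)

    with pysam.AlignmentFile("phix.bam", "rb") as bam:
        for read in bam.fetch(until_eof=True):
            if read.is_unmapped or read.query_qualities is None:
                continue
            # with_seq=True needs MD tags; mismatched reference
            # bases come back in lowercase.
            for qpos, rpos, ref in read.get_aligned_pairs(with_seq=True):
                if qpos is None or rpos is None:
                    continue  # skip indels and clipped bases
                q = read.query_qualities[qpos]
                totals[q] += 1
                if ref.islower():
                    mismatches[q] += 1

    print("reported_Q\tempirical_Q\tn_bases")
    for q in sorted(totals):
        err = max(mismatches[q], 1) / float(totals[q])  # avoid log(0)
        print("%d\t%.1f\t%d" % (q, -10 * math.log10(err), totals[q]))

If the reported scores are well calibrated, the two columns track each other; where they diverge, you're seeing exactly the platform-specific bias being discussed here.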

The generation of Q scores is difficult (think P vs NP if you're a maths person), and that leads to data-specific biases that in most cases cannot be predicted prior to carrying out a run on a new sample.

tobias.mann 11-05-2013 07:39 AM

Actually, generating Q scores is usually done by applying a calibrated statistical model that uses quality predictors that are vendor- and platform-specific. It's essentially a machine learning exercise.

Every calibration is imperfect, certainly, but you don't need to know everything about all the error modes and quirks of a platform to get a reasonably good quality model. The difficulties in getting good calibrations (and hence reliable quality scores) are more practical in nature than theoretical.
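
To make the "machine learning exercise" concrete, here is a toy sketch (not any vendor's actual model; the predictors and simulated training data are invented for illustration) that fits a logistic regression from simple per-base predictors to an error probability, then maps that back to a Phred score:

Code:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Simulated training data: a raw instrument quality and the cycle
    # (position in read) as predictors, plus whether each base was wrong
    # against a known reference. Real calibrations use vendor- and
    # platform-specific predictors.
    rng = np.random.default_rng(0)
    n = 100000
    raw_q = rng.integers(5, 40, size=n)
    cycle = rng.integers(0, 200, size=n)
    true_p = 10 ** (-raw_q / 10.0) * (1 + cycle / 200.0)
    is_error = rng.random(n) < true_p

    X = np.column_stack([raw_q, cycle])
    model = LogisticRegression().fit(X, is_error)

    # Recalibrated Phred score: Q = -10 * log10(P_error).
    p = model.predict_proba(X)[:, 1]
    recal_q = -10 * np.log10(np.clip(p, 1e-6, 1))
    print("mean raw Q: %.1f, mean recalibrated Q: %.1f"
          % (raw_q.mean(), recal_q.mean()))

The calibration step (fitting against bases whose truth is known) is where the practical difficulties mentioned above come in: you need enough well-characterised data covering the conditions your real samples will hit.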

