Ion Proton Q20
Hello all, we've only just done our first Proton run, and the per-base quality in FastQC has ~90% of our data in the Q20 to Q28 range. Should I be able to get up into the 30s, or is it normal to be in the 20s with the Proton?
JPC
I think your quality is already very good for the Proton.
In case anyone else is interested, I downloaded some exome data from Ion Community and converted the BAM files so I could put them into FastQC. The results are very similar to what we are seeing, with Q20 to Q30 being the range in which the vast majority of the data lies.
We're doing some recalibration in-house and seeing some Q30+ data, but not much.
Also - keep in mind that Q30 Ion data != Q30 Illumina data, since the base-calling algorithms are different and the estimation/prediction of "quality" is inherently different between the base callers on each platform.
Could you elaborate on how Q30 (Ion) is != Q30 (Illumina)? I thought it was a set value indicating a base-calling accuracy of 99.9% (hence one error in 1,000 bases).
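For what it's worth, the nominal Phred definition is the same on paper for every platform: Q = -10*log10(p_error), so Q20 is a claimed 1% per-base error rate and Q30 a claimed 0.1%. A quick sketch of the conversion (plain Python, nothing vendor-specific):

Code:
import math

def phred_to_error_prob(q):
    """Claimed per-base error probability for a Phred quality score."""
    return 10 ** (-q / 10.0)

def error_prob_to_phred(p):
    """Phred quality score corresponding to an error probability."""
    return -10.0 * math.log10(p)

for q in (10, 17, 20, 28, 30):
    print(f"Q{q}: claimed error rate = {phred_to_error_prob(q):.4%}")
# Q20 -> 1.0000%, Q30 -> 0.1000%, i.e. one error per 100 vs. one per 1,000 bases.

The disagreement further down the thread is not about this formula, but about how well each platform's predicted error probability matches the error rate you actually observe.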
Quote:
[please don't consider that an exhaustive list... that's just what came to my mind in a couple of minutes] Different systems will have different biases due to the different technology used in the system, as well as different assumptions made by the people writing the code to calculate error probability. All that can really be done is to make some guesses about which parameters are the most important, test a few models based on those parameters in real-world scenarios, choose the best model(s), and hope that your models fit the most common use cases for analysis. In short, it would be silly for Illumina and Life Technologies to use the same method for calculating sequencing error, because of the substantially different technology behind the machines.
In that case, how can you compare the Q scores at all? Clearly it makes a huge difference whether you look at an Illumina Q30 run, a Proton Q17 run, or a PacBio <Q10 run.
Just by looking at the data in a browser you can see more artifacts the lower the Q score, but I agree, it is difficult to compare all these apples and oranges and bananas. With differing read lengths, PE vs. SR, and long reads from PacBio, the final assembly/alignment will show you stats that might look comparable at first sight, but once you drill down deeper it raises many questions...
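One rough way to put the apples, oranges and bananas on a common axis is to align each dataset to a trusted reference and compare the claimed quality with the mismatch rate you actually observe in each Q bin. A minimal sketch, assuming you have already extracted per-base (claimed_q, is_error) pairs from your own alignments (that extraction step is pipeline-specific and not shown here):

Code:
import math
from collections import defaultdict

def empirical_quality(base_calls):
    """base_calls: iterable of (claimed_q, is_error) pairs, one per base."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for claimed_q, is_error in base_calls:
        totals[claimed_q] += 1
        errors[claimed_q] += int(is_error)
    report = {}
    for q in sorted(totals):
        # Small pseudocount so bins with zero observed errors stay finite.
        p_obs = (errors[q] + 0.5) / (totals[q] + 1.0)
        report[q] = -10.0 * math.log10(p_obs)
    return report

# Example with made-up numbers: a platform whose "Q30" bases behave like ~Q27.
fake_calls = [(30, i % 500 == 0) for i in range(100_000)]
for claimed, observed in empirical_quality(fake_calls).items():
    print(f"claimed Q{claimed}: empirical ~Q{observed:.1f}")

Running the same calculation on an Illumina run and a Proton run against the same reference is about as close to a like-for-like comparison as you can get, read-length and indel differences aside.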
Quote:
The generation of Q scores is difficult (think P vs NP if you're a maths person), and that leads to data-specific biases that in most cases cannot be predicted prior to carrying out a run on a new sample.
Actually, generating Q scores is usually done by applying a calibrated statistical model that uses quality predictors that are vendor- and platform-specific. It's essentially a machine learning exercise.
Every calibration is imperfect, certainly, but you don't need to know everything about all the error modes and quirks of a platform to get a reasonably good quality model. The difficulties in getting good calibrations (and hence reliable quality scores) are more practical than theoretical.
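To make the "machine learning exercise" point concrete, here is a toy sketch of how a calibrated model can map per-base predictors to a quality score: fit a classifier on bases with known truth, then convert its predicted error probability to Phred. The predictor names and the simulated data are invented for illustration only; real pipelines use their own vendor-specific features and validation.

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical predictors: signal-to-noise, position in read, homopolymer length.
X = np.column_stack([
    rng.normal(1.0, 0.2, n),     # signal-to-noise-ish feature
    rng.uniform(0, 200, n),      # cycle / flow position
    rng.integers(1, 8, n),       # homopolymer run length
])

# Simulated truth: error probability rises with position and homopolymer length.
logit = -6.0 + 0.01 * X[:, 1] + 0.4 * X[:, 2] - 1.0 * X[:, 0]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # True = base was called wrong

model = LogisticRegression(max_iter=1000).fit(X, y)

p_err = model.predict_proba(X[:5])[:, 1]
phred = -10 * np.log10(np.clip(p_err, 1e-6, 1.0))
print(np.round(phred, 1))   # calibrated quality scores for the first five bases

Real base callers do the same thing with far richer, platform-specific features, which is exactly why two vendors' "Q30" bases can behave differently in practice even though the Phred scale itself is shared.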