  • Ion Proton Q20

    Hello all, we've just done our first Proton run, and the per-base quality plot in FastQC shows ~90% of our data in the Q20 to Q28 range. Should I be able to get up into the 30s, or is it normal to be in the 20s with the Proton?

    JPC
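
    For anyone wanting to reproduce that kind of summary outside FastQC, here is a minimal Python sketch that tallies the per-base quality distribution of a FASTQ file; the filename and the Phred+33 quality offset are assumptions:

        # Tally per-base qualities and report the fraction in Q20-Q28.
        from collections import Counter

        counts = Counter()
        with open("proton.fastq") as fq:  # hypothetical filename
            for i, line in enumerate(fq):
                if i % 4 == 3:  # every 4th FASTQ line is the quality string
                    counts.update(ord(c) - 33 for c in line.strip())

        total = sum(counts.values())
        in_range = sum(n for q, n in counts.items() if 20 <= q <= 28)
        print("Q20-Q28: %.1f%% of %d bases" % (100.0 * in_range / total, total))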

  • #2
    I think your quality is already very good for the Proton.



    • #3
      In case anyone else is interested, I downloaded some exome data from Ion Community and converted the BAM files so I could put them into FastQC. The results are very similar to what we are seeing, with Q20 to Q30 being the range in which the vast majority of the data lies.

      We're doing some recalibration in-house and seeing some Q30+ data, but not much.
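
      One way to do that BAM-to-FASTQ conversion is with pysam; a minimal sketch with placeholder filenames (recent FastQC versions can also read BAM directly):

          # Convert a BAM to FASTQ so the reads can be fed to FastQC.
          # check_sq=False tolerates unaligned BAMs, which is how Ion
          # instruments deliver reads.
          import pysam

          with pysam.AlignmentFile("exome.bam", "rb", check_sq=False) as bam, \
                  open("exome.fastq", "w") as fq:
              for read in bam.fetch(until_eof=True):
                  qual = pysam.qualities_to_qualitystring(read.query_qualities)
                  fq.write("@%s\n%s\n+\n%s\n"
                           % (read.query_name, read.query_sequence, qual))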



      • #4
        Also, keep in mind that Q30 Ion data != Q30 Illumina data: the base-calling algorithms differ between the platforms, so the estimation/prediction of "quality" is inherently different as well.
        @bioinformer
        http://www.linkedin.com/in/jonathanjacobs



        • #5
          Could you elaborate on how Q30 (Ion) != Q30 (Illumina)? I thought it was a fixed value indicating a base-calling accuracy of 99.9% (hence one error in 1000 bases).
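
          (For reference, the fixed relationship being described is the Phred formula Q = -10 * log10(p); a quick sketch of the arithmetic:)

              # Phred scale: Q = -10 * log10(p_error), and its inverse.
              import math

              def q_from_p(p_error):
                  return -10 * math.log10(p_error)

              def p_from_q(q):
                  return 10 ** (-q / 10.0)

              print(q_from_p(0.001))  # ~30: Q30 nominally means 1 error in 1000 bases
              print(p_from_q(20))     # 0.01: Q20 means 1 error in 100 bases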



          • #6
            Originally posted by Buzz0r:
            Could you elaborate on how Q30 (Ion) != Q30 (Illumina)? I thought it was a fixed value indicating a base-calling accuracy of 99.9% (hence one error in 1000 bases).
            The cause of errors is unpredictable, so it is essentially impossible to work out the precise accuracy of a given sequenced base. There are a number of things that can contribute to error; some can be measured, some cannot:
            • polymerase read errors
            • wobble base pairing
            • incorrect primer binding
            • image focus
            • ion calibration
            • fluorophore overlap / bleed through
            • phasing (stochastic chemistry for base addition)
            • inadequate flow
            • irregular flow
            • electrical noise
            • optical noise
            • bubbles
            • sample contamination
            • sample degradation


            [please don't consider that an exhaustive list... that's just what came to my mind in a couple of minutes]

            Different systems will have different biases due to the different technology used in the system, as well as different assumptions made by the people writing the code to calculate error probability. All that can really be done is to make some guesses about what parameters are the most important, test a few models based on those parameters in real-world scenarios, choose the best model(s), and hope that your models fit the most common use cases for analysis.

            In short, it would be silly for Illumina and Life Technologies to use the same method for calculating sequencing error, because of the substantially different technology behind the machines.
            Last edited by gringer; 11-04-2013, 02:51 AM.



            • #7
              In that case, how can you compare the Q scores at all? Clearly, it makes a huge difference whether you look at an Illumina Q30 run, a Proton Q17 run, or a PacBio <Q10 run.

              Just by looking at the data in a genome browser you can see more artifacts the lower the Q score. But I agree, it is difficult to compare all these apples, oranges, and bananas: with differing read lengths, PE vs. SR, and long reads from PacBio, the final assembly/alignment will show you stats that might look comparable at first sight, but once you drill down deeper, many questions arise...



              • #8
                Originally posted by Buzz0r:
                In that case, how can you compare the Q scores at all? Clearly, it makes a huge difference whether you look at an Illumina Q30 run, a Proton Q17 run, or a PacBio <Q10 run.
                Evaluating (and comparing) the correctness of Q scores for a well-known dataset is fairly easy -- PhiX is a common one, but everyone has their own pet dataset. All you need to do is compare the reported quality values with what is expected ("This is a 17-generation inbred strain with SNPs here, here and here, and a gene translocation from here to here. If the sequence is showing a SNP anywhere else, then that's probably an incorrect base").
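
                A minimal sketch of that kind of check, assuming you already have each base's reported Q paired with whether it mismatched the known reference (the toy list below stands in for real alignment data):

                    # Compare reported Q scores with empirically observed error
                    # rates; real input would come from alignments to, say, PhiX.
                    import math
                    from collections import defaultdict

                    calls = [(30, False)] * 997 + [(30, True)] * 3  # toy data

                    totals, errors = defaultdict(int), defaultdict(int)
                    for q, mismatch in calls:
                        totals[q] += 1
                        errors[q] += mismatch

                    for q in sorted(totals):
                        p = max(errors[q], 0.5) / totals[q]  # pseudocount avoids log(0)
                        print("reported Q%d -> empirical Q%.1f"
                              % (q, -10 * math.log10(p)))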

                The generation of Q scores is difficult (think P vs NP if you're a maths person), and that leads to data-specific biases that in most cases cannot be predicted prior to carrying out a run on a new sample.



                • #9
                  Actually, generating Q scores is usually done by applying a calibrated statistical model that uses quality predictors which are vendor- and platform-specific. It's essentially a machine learning exercise.

                  Every calibration is imperfect, certainly, but you don't need to know everything about all the error modes and quirks of a platform to get a reasonably good quality model. The difficulties in getting good calibrations (and hence reliable quality scores) are more practical in nature than theoretical.
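
                  As a toy illustration of that kind of calibration (not any vendor's actual model), one can bin training bases by a predictor such as the raw Q score and replace each bin's value with the empirical quality observed against known ground truth:

                      # Toy recalibration table: raw Q bin -> empirical quality,
                      # learned from training data with known ground truth.
                      import math
                      from collections import defaultdict

                      def build_table(training):  # training: [(raw_q, was_error), ...]
                          totals, errors = defaultdict(int), defaultdict(int)
                          for raw_q, was_error in training:
                              totals[raw_q] += 1
                              errors[raw_q] += was_error
                          return {q: -10 * math.log10(max(errors[q], 0.5) / totals[q])
                                  for q in totals}

                      table = build_table([(25, False)] * 990 + [(25, True)] * 10)
                      print(round(table[25]))  # 20: bases labelled Q25 behaved like Q20

                  A real calibrator would use more predictors (flow position, homopolymer context, cycle number) and smooth across sparse bins, which is where the practical difficulty lies.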

