  • Poor Ion PI Chip loading - can it possibly be caused by some contamination?

    Hello everyone,
    I have been working with the Ion Torrent Proton sequencer for almost two years now. I run an RNA-Seq pipeline: prepare cDNA libraries (Ion Total RNA-Seq Kit v2), perform emulsion PCR on the One Touch 2, enrich ISPs on the One Touch ES, then load the ISPs onto a chip (Ion PI Sequencing 200 Kit v3, Ion PI Chip v2). I perform all these steps identically every time (at least, I believe so), but some runs (around 20% of them) inexplicably fail. Usually it's poor chip loading (less than 20% of wells), even though I can achieve 80-90% loading in other runs. Once or twice loading was mediocre, around 30%, but the majority of the reads were "low quality" reads.
    I stick to the protocol quite strictly; the only major deviations concern the cleaning chips (it's a kind of bad lab tradition). I routinely use cleaning chips for initialization (but only the chips designated for MQ-water cleaning, not the chlorite cleaning chips); the protocol explicitly says you shouldn't do this, but doesn't explain why. I also sometimes use the same MQ cleaning chip and the same chlorite cleaning chip for months, although you're supposed to replace them weekly (we usually run the sequencer once a month or even more rarely, so I simply don't have a used chip from the previous week). However, I perform the full chlorite cleaning almost every time before initialization (again because the sequencer is used so rarely), and I've never seen any initialization errors such as pH out of range.

    I have identified only three factors, known to me before I start a sequencing run, that can spoil chip loading density. First, foam: if the bubbles are too big and you inject too much air, you will lose some ISPs. Second, if, despite all precautions, there was an electrostatic discharge while picking up the chip, loading worsens and you see distinct rectangular patterns on the loading heatmap. Third, once my libraries were too long (peak length 300+ bp instead of 200+), and I believe that either the ISPs didn't fit into the wells or the elongation time wasn't sufficient during emulsion PCR. Still, none of these factors reduces loading below roughly 60%, and the occasional 10% disaster can't be explained by them.

    In my runs, the test fragments' Percent 50AQ17 is strangely elevated in bad runs: it's 80-90% in bad runs and only about 50-60% in good runs (it's supposed to be >90% if everything's OK, isn't it?).

    Is it possible that abusing the cleaning chips results in some contamination and somehow leads to poor loading? Or are there other factors I should take into account? I've read lots of troubleshooting entries and haven't found an answer.

    I've attached three reports here, as examples of "good" and "bad" runs.

    If anyone has bad reports where you know for sure what caused the failure, please show them to me too.
    Attached Files
    Last edited by lucorum; 11-29-2016, 06:24 AM.

  • #2
    Hello!

    About the chip:
    I wouldn't advise reusing the initialization chip several times. Also, I agree with the official protocol and wouldn't use the ultrapure-water cleaning chip for initialization. If you think about it, even though the cleaning procedure ends with drying the tubes, the chip will come into contact with some chlorite. After all, if the system didn't still have some chlorite solution in it, you wouldn't need the second cleaning with 18 MOhm water afterwards. So I'd say at least part of your problems might be due to this.
    If you happen to use the instrument very rarely, what I would suggest (it works empirically for us) is long-term storage of the chip from the last initialization. What we do is pass 3x 100 ul of 50% annealing buffer, then 2x flush buffer, then 2x isopropanol; we dry the chip by vacuum aspiration and store it dry. Then, just before initialization, we re-hydrate it by doing the opposite procedure.

    About the control fragments:
    What kit are you using for template preparation? I don't see the TF_1 control fragment in the report, which is included in templating kits. It should be shown there, I think.
    The TF_C data isn't good. This could be due either to a bad run or, perhaps, to the control ISPs themselves being old. Are you using any "very" expired reagents? If you think the TF_C particles are just bad themselves, then the run could still be OK. If not, the run itself is suspect.
    Also, you have lots of TF_C sequences there. We use Hi-Q kits and commonly get around 400k-600k counts for this fragment. We once had a bad transcriptome run with 8% loading, and we got around 2 million TF_C sequences. So I'd guess you'd only get 1.3 million TF_C reads in a "good run" if the number of library ISPs was barely above the minimum necessary to completely fill the chip.
    In this sense, you might be dealing with a "barely enough vs. not enough" ISPs scenario across your good and bad runs.

    About the consensus key:
    Although the "good run" has key peaks at about ~100, the bad ones are around 30. This isn't good. If fragmentation and templating went properly, you should see key peaks around 100 (or more, say ~130, if your fragments are short, i.e. ~100 bp or below).
    So I would say this is a second hint that templating didn't go well - the first being the low number of loaded wells.

    Polyclonal ISPs are in the usual range or below, so you're not adding too much library to the template reaction; if anything, you may be adding a bit less than usual. In my experience, polyclonals only drop sharply when there's really not enough library DNA for a good run.

    I must close now, but all in all I think your "main" issue is before the run, though your run isn't perfect either. If I were in your situation, I'd check whether fragmentation is giving DNA of an appropriate size distribution, since large fragments might template poorly. I'd also check the quantifications. And I'd check why the TF_1 fragment isn't in the report.
    If you want to actually check the size distribution of your reads, you can re-analyze the run with quality trimming removed (that'd be by adding --trim-qual-cutoff 100 to the basecaller options - I'm attaching Scott Herke's image here because the IonCommunity is being discontinued soon).
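    As a rough sketch of what I mean by checking the distribution afterwards - this is not part of the official pipeline, and the file name is just a placeholder for the unmapped BAM you'd download from the re-analyzed report - something like this (Python with pysam) prints a read-length histogram:

    import pysam
    from collections import Counter

    # Placeholder path: the unmapped BAM (e.g. the *.basecaller.bam)
    # downloaded from the re-analyzed report.
    bam_path = "rawlib.basecaller.bam"

    bins = Counter()
    # check_sq=False because Ion unmapped BAMs carry no reference sequences in the header.
    with pysam.AlignmentFile(bam_path, "rb", check_sq=False) as bam:
        for read in bam:
            bins[read.query_length // 25 * 25] += 1  # count reads in 25 bp bins

    total = sum(bins.values())
    for start in sorted(bins):
        share = bins[start] / total
        print(f"{start:4d}-{start + 24:<4d} bp  {bins[start]:9d}  {'#' * round(share * 60)}")

    If the untrimmed reads show a long tail well above ~200 bp, that would support the "fragments too long" suspicion.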
    And I'd totally try to use the chip from the last sequencing for initialization. Don't use the cleaning chip.
    Good luck!

    Comment


    • #3
      r.rosati, thanks a lot!

      I should try your method of storing used chips ("the opposite procedure" here means 2x isopropanol -> 2x flush buffer -> 3x 100 ul of 50% annealing buffer, am I right?). Do you use such chips for initialization several times in a row, or only once?
      There's almost never "a used chip from a run performed less than 2 days ago" in our lab, and I struggle with the idea of just using a pristine new chip (once, an error in the preparation of the pooled libraries was discovered just before chip loading, so I prefer to open a new chip as late as possible). So you suppose that the main source of contamination from the cleaning chip would be traces of chlorite buffer, and that chlorite destroys DNA? (I have another idea here, dubious, but still: what if I perform the post-chlorite MQ cleaning not once but twice, and then still use that MQ chip for initialization?)

      My template preparation kit (if I understood the question correctly) is the Ion PI OT2 200 Kit v3, which I use for emulsion PCR and then for enrichment of the ISPs. There is no specific mention in this kit's protocol of adding control ISPs to the sample, so I guess they must already be mixed with the empty ISPs in the stock tube?
      Before dilution and pooling of the libraries, I always check the length distribution of the libraries on an Agilent 2100 Bioanalyzer, and usually it's a Gaussian distribution with a peak at about 200-280 bp. That corresponds to a read length of about 100-150 bp (after primer and adapter trimming).

      The control ISPs that I add just before sequencing-primer annealing (I guess they are TF_C) are indeed expired, like most of the other reagents we have; I'm not sure about "very" (their expiry date was 1-1.5 years ago). I'd rather not discuss why, that's just what we've got. I already asked our bioinformaticians about the TF read quality, but they couldn't interpret it; they just said there are other ways to check read quality for the samples, and those confirm that the quality is OK.

      As regards possible problems before the run: first, I often obtain less total RNA than necessary to start with (it's extracted from specific sources, like human leukocytes or a single snail neuron, and it's impossible to get enough), but I guess the worst-case scenario here is a decreased concentration of the libraries, and I dilute the libraries manyfold anyway. I may also lose some rare transcripts, but I can't lose the whole library, can I?
      Second, I am sometimes suspicious of the OT2 machine's performance; I even tried to export the run logs from the OT2, but they wouldn't write to my flash drive. I'm considering ordering the Ion Sphere Quality Control Kit for the Qubit, but that kit only measures the template-adapter/ISP-adapter ratio and tells you whether there was any amplification at all; it can't detect possible contamination or fragmentation of the amplified DNA. Are there any other ways to check that the OT2 is functioning properly?

      Comment


      • #4
        Originally posted by lucorum View Post
        I should try your method of storing used chips ("the opposite procedure" here means 2x isopropanol -> 2x flush buffer -> 3x 100 ul of 50% annealing buffer, am I right?). Do you use such chips for initialization several times in a row, or only once?
        Yes, correct, that's the method.
        I use them only once, and then replace them with the chip I've just run.
        We have the same problem of very rarely having two initializations in the same week. Chips prepared like this, and stored protected from light and humidity, should work for at least a couple of months (at least they do in our hands).

        Oh, you're right, I'm sorry: the OT2 200 v3 kit doesn't contain TF_1 control fragments. So you should only see TF_C fragments, just as you do.

        We also periodically use expired kits. That's how funding goes sometimes. Let's assume then that the run is not the main problem.

        Originally posted by lucorum View Post
        Second, I am sometimes suspicious of the OT2 machine's performance; I even tried to export the run logs from the OT2, but they wouldn't write to my flash drive.
        Oh, I know why; it's a simple reason, and there are two workarounds.
        The reason - I passed this info on to our local Thermo engineer last year - is that the OT2's operating system is loaded as read-only; that's why you lose all the log data when you turn it off, and also why you can simply power it off with the switch. But it also means the system cannot create a directory under /media on which to mount the USB drive.
        There are two workarounds.
        The first one - as I'm told - is that the Torrent Server that's linked to the OT2 will automatically download and store the last 10 logs. They should be under
        /results/OT/
        and you can access them through any computer that can access your Torrent Server, via WinSCP (on Windows) or the like.
        The second one is to fix the issue properly by remounting the OS as read/write, permanently creating the folder that the OT2 needs in order to mount the USB drive, and then locking it again as read-only. This, however, could possibly invalidate your warranty. I asked our local Thermo engineer to do it for us, after he agreed that the procedure was OK.
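        And if command-line access is easier than WinSCP for you, a small sketch along the lines of the following would pull everything from /results/OT/ onto your own machine (Python with paramiko; the server address and login are just placeholders for whatever your site uses):

        import paramiko

        # Placeholders: use your own Torrent Server address and credentials.
        host = "torrent-server.local"
        user = "ionadmin"
        password = "changeme"

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, password=password)

        sftp = client.open_sftp()
        remote_dir = "/results/OT/"
        # Assumes the entries under /results/OT/ are plain log files.
        for name in sorted(sftp.listdir(remote_dir)):
            print("downloading", name)            # the stored OneTouch logs
            sftp.get(remote_dir + name, name)     # copy each file to the current directory

        sftp.close()
        client.close()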

        Comment


        • #5
          Bad run with low quality - this looks like not enough library going into the template prep reaction. The test fragments are very high, which means you did not have many library ISPs during loading.

          I would not read too much into the low quality for runs with low loading; it is always high when loading is low.

          Comment


          • #6
            Thank you!

            Originally posted by snetmcom View Post
            I would not read too much into the low quality for runs with low loading; it is always high when loading is low.
            I'm sorry, I didn't quite understand what you said here. Do you mean that if template preparation is OK but the loading itself is bad (due to bubbles, for example), I should expect only a small percentage of "low quality" reads? And that if I have bad loading AND low quality, it means there are problems not only with loading but with template preparation too, and I shouldn't use those results? (We didn't; we repeated template preparation and sequencing one more time.)

            Thermo Fisher also recommends rinsing an "old, used chip" (they don't say how old it can be) with 100 μL of isopropanol and then 100 μL of water to prepare it for initialization. Has anybody tried this?

            Comment


            • #7
              Oh, P.S. about the OneTouch logs: a second reason for failing to export logs is the USB drive being formatted with an NTFS filesystem. The OT2's OS can't handle NTFS; the drive has to be FAT- or ext4-formatted.

              Comment


              • #8
                r.rosati, about the OT2:
                I tried to download the logs right after the run, so the instrument was continuously turned on and shouldn't have lost the data. I formatted two different flash drives in FAT32 (FAT32 is the default file system for these drives, and the protocol doesn't specify which file system it must be) and tried to write the logs to each of the drives at least twice, and failed. I will try FAT next time, thank you for the advice.
                Our OT2 instrument isn't connected to the server and I don't know how to connect it (I think it should be quite simple, though).

                I also have another (stupid) question about used chips: we recently purchased some new v3 chips but haven't used one yet; all our used chips are v2. Would it be OK to use the old v2 chips for cleaning and initialization and then a v3 chip for sequencing?

                I really appreciate your support, thank you!

                Comment


                • #9
                  Hello!

                  Originally posted by lucorum View Post
                  r.rosati, about the OT2:
                  I tried to download the logs right after the run, so the instrument was continuously turned on and shouldn't have lost the data. I formatted two different flash drives in FAT32 (FAT32 is the default file system for these drives, and the protocol doesn't specify which file system it must be) and tried to write the logs to each of the drives at least twice, and failed. I will try FAT next time, thank you for the advice.
                  FAT32 is OK, so unfortunately this is not the problem.

                  Originally posted by lucorum View Post
                  Our OT2 instrument isn't connected to the server and I don't know how to connect it (I think it should be quite simple, though).
                  Oh. Are you updating it via USB? So does your OneTouch 2 recognize the USB drive for installing updates?


                  Originally posted by lucorum View Post
                  I also have another (stupid) question about used chips: we recently purchased some new v3 chips but haven't used one yet; all our used chips are v2. Would it be OK to use the old v2 chips for cleaning and initialization and then a v3 chip for sequencing?
                  Yes you can still use them for cleaning; about initialization, yes, but it depends on how old they are.

                  Notice that v2 and v3 chips are "indistinguishable" to our Ion Proton (they're both identified as the same version); the difference between them is that the new v3 chip comes pre-prepared (but expires), while the v2 needs prep (but doesn't expire).

                  So you can also still use new v2 chips for sequencing. This is how I do it, but mind that the "usual" prep might be enough:

                  Originally posted by r.rosati (on IonCommunity)
                  We used a simple setup to ease the preparation process: a "third hand" grip (as used for electronics work) to keep a vacuum line in place within the exit well of the chip, near but not above the exit port (to prevent aspiration of liquid from inside the chip). This streamlines the washing procedure and allows the chip to be cleaned with 1 ml syringes.

                  So basically the difference is that whenever the protocol called for injecting 200 ul, or 2x 200 ul, of isopropanol/water/NaOH, we would just inject a whole 1 ml.

                  The preparation protocol was thus:

                  1) Three cycles of:

                  - inject 1ml isopropanol

                  - vacuum dry

                  - inject 100 ul of chip preparation solution, remove the excess

                  - incubate 2 mins, 50°C

                  - inject 1ml isopropanol

                  - inject 1ml water

                  - inject 1ml NaOH

                  - Incubate 1 min, RT

                  - inject 1ml water

                  (so it's basically the same back-and-forth of: chip prep, iPrOH, water, NaOH, water, iPrOH, chip prep - just with 1 ml volumes and automatic vacuuming of the solutions at the exit well.)

                  2) Inject 1 ml isopropanol

                  3) wash the loading well with 100 ul isopropanol

                  4) vacuum-dry

                  5) store at RT, in a dry container away from light, until the next day


                  The chips were used the next day, directly as stored. The ISPs were added to the dry chip, and then we followed the Hi-Q protocol by the book. This means there was no need to calibrate the chip before the run because, empirically, the pre-run calibration of the Hi-Q program is equivalent to the pre-run calibration step in the v2 chip protocol.
                  Last edited by r.rosati; 12-02-2016, 06:54 AM.

                  Comment


                  • #10
                    Thanks!
                    I'm not sure exactly how we update the OT2 software. However, it has the Hi-Q kit in its menu, so the software can't be really old, and an update must have been performed at least once. I should ask my colleagues about this.

                    Originally posted by r.rosati View Post

                    Yes you can still use them for cleaning; about initialization, yes, but it depends on how old they are.

                    Notice that v2 and v3 chips are "indistinguishable" to our Ion Proton (they're both identified as the same version); the difference between them is that the new v3 chip comes pre-prepared (but expires), while the v2 needs prep (but doesn't expire).

                    So you can also still use new v2 chips for sequencing.
                    Oh, I'm glad to hear this, because it seems we have an excess of v2 chips: there are a few chips left, but the reagents designed for them (Sequencing 200 Kit v3) are used up. It's good to know that the v2 chips can still be used with the new reagents.

                    Unfortunately, I can't log in to Ion Community (I tried to register with my e-mail a few days ago, but they are still deciding whether I should have an account).
                    Last edited by lucorum; 12-07-2016, 06:52 AM.

                    Comment
