11-30-2016, 06:41 AM   #2
r.rosati (Member; Brazil; joined Aug 2015; 69 posts)
Hello!

About the chip:
I wouldn't advise reusing the initialization chip several times. I also agree with the official protocol in that I wouldn't use the chip from the ultrapure-water cleaning for initialization. Even if the cleaning procedure ends with drying the tubes, that chip will have been in contact with some chlorite; after all, if the system didn't still hold some chlorite solution, you wouldn't need the second cleaning with 18 MOhm water afterwards. So I'd say at least part of your problems might be due to this.
If you happen to use the instrument very rarely, what I'd suggest (it works for us, empirically) is long-term storage of the chip from the last initialization. What we do is pass 3x 100 ul of 50% Annealing Buffer, then 2x Flush Buffer, then 2x isopropanol; we dry the chip by vacuum aspiration and store it dry. Then, just before initialization, we re-hydrate it by running the same washes in reverse order (sketched below).
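
For reference, here's that sequence as a minimal checklist sketch in Python. The Flush Buffer and isopropanol volumes aren't something I specified above, so the 100 ul in the sketch is an assumption; substitute whatever your own wash protocol uses.

Code:
# A toy checklist of the storage washes described above. Volumes for the
# Flush Buffer and isopropanol passes aren't stated, so 100 ul is an
# assumption here; substitute your own protocol's volumes.
WASHES = [
    ("50% Annealing Buffer", 3),  # 3 passes of 100 ul
    ("Flush Buffer", 2),
    ("Isopropanol", 2),
]

def checklist(washes):
    for reagent, passes in washes:
        print(f"  {passes} x 100 ul {reagent}")

print("Storage (finish by vacuum-drying; store the chip dry):")
checklist(WASHES)
print("Rehydration (just before initialization, reverse order):")
checklist(list(reversed(WASHES)))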

About the control fragments:
Which kit are you using for template preparation? I don't see the TF_1 control fragment in the report; it's included in the templating kits, so I'd expect it to show up there.
The TF_C data isn't good. This could be due either to a bad run or to the control ISPs themselves being old. Are you using any "very" expired reagents? If you think the TF_C particles themselves have gone bad, the run could still be OK; if not, the poor TF_C metrics point to a problem with the run itself.
Also, you have a lot of TF_C sequences there. We use Hi-Q kits and commonly see around 400k-600k counts for this fragment. We once had a bad transcriptome run with 8% loading, and it gave around 2 million TF_C sequences. So my guess is you'd only see 1.3 million TF_C reads in a "good run" if the number of ISPs was barely above the minimum needed to completely fill the chip.
In that sense, you might be dealing with a "barely enough vs. not enough" ISPs scenario between your good and bad runs.
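
To put that proportion into rough numbers (the TF_C counts are the ones quoted above; the total-ISP figure is a placeholder you'd read off your own report):

Code:
# Crude back-of-the-envelope check of TF_C over-representation.
# tf_c_reads and the typical range are the figures quoted above;
# total_isps is hypothetical: read it off your own run report.
tf_c_reads = 1_300_000                     # TF_C count in the "good" run
total_isps = 60_000_000                    # placeholder: loaded ISPs
typical_lo, typical_hi = 400_000, 600_000  # our usual Hi-Q range

print(f"TF_C fraction of ISPs: {tf_c_reads / total_isps:.2%}")
if tf_c_reads > typical_hi:
    print("TF_C over-represented: library ISPs may be barely enough.")

The higher the TF_C share, the more it suggests the control particles were competing with too few library ISPs.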

About the consensus key:
Although the "good run" has key peaks at ~100, the bad ones are around 30, and that isn't good. If fragmentation and templating went properly, you should see key peaks around 100 (or more, say ~130, if your fragments are short, i.e. ~100 bp or below).
So I'd call this a second hint that templating didn't go well, the first being the low number of loaded wells.

Polyclonal ISPs are in the usual range or below, so you're not adding too much library to the templating reaction; if anything, you may be adding a bit less than usual. In my experience, polyclonals only drop sharply when there's really not enough library DNA for a good run.
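
If it helps, here's how I'd encode those two rules of thumb as a quick triage check. The thresholds are rough figures from experience, not official specs, and the usual_polyclonal value and the example percentages are placeholders you'd set from your own run history.

Code:
# Rule-of-thumb triage of two run-report metrics, per the reasoning above.
# Thresholds are informal; usual_polyclonal is a placeholder to set from
# your own typical runs.
def triage(key_peak, polyclonal_pct, usual_polyclonal=30.0):
    notes = []
    if key_peak < 70:  # healthy runs show key peaks around 100 or more
        notes.append(f"key peak {key_peak} is low: templating suspect")
    if polyclonal_pct < 0.5 * usual_polyclonal:
        notes.append("polyclonals dropped sharply: library may be scarce")
    return notes or ["these two metrics look plausible"]

print(triage(key_peak=100, polyclonal_pct=28))  # like your "good" run
print(triage(key_peak=30, polyclonal_pct=25))   # like your "bad" runs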

I must close now, but all in all I think your "main" issue is upstream of the run, though the run itself isn't perfect either. In your situation I'd check whether fragmentation is giving DNA of an appropriate size distribution, since large fragments can template poorly; I'd double-check the quantifications; and I'd find out why the TF_1 fragment isn't in the report.
If you want to actually check the size distribution of your reads, you can re-analyze the run with quality trimming disabled, i.e. by adding:
--trim-qual-cutoff 100
to the BaseCaller options. (Adding Scott Herke's image here because the Ion Community is being discontinued soon.)
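
Once you have the untrimmed reads back, a quick way to look at the length distribution is something like the sketch below ("untrimmed.fastq" is a placeholder for your re-analysis output):

Code:
# Minimal read-length histogram over an untrimmed FASTQ file.
# "untrimmed.fastq" is a placeholder for your re-analysis output.
from collections import Counter

lengths = Counter()
with open("untrimmed.fastq") as fq:
    for i, line in enumerate(fq):
        if i % 4 == 1:                    # sequence line of each record
            lengths[len(line.strip())] += 1

for start in range(0, 401, 50):           # 50 bp bins up to 400 bp
    n = sum(c for L, c in lengths.items() if start <= L < start + 50)
    print(f"{start:3d}-{start + 49:3d} bp: {n} reads")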
And I'd definitely try using the chip from the last sequencing run for initialization. Don't use the cleaning chip.
Good luck!