  • #31
    Originally posted by MissDNA:

    I have also attached the bioanalyzer trace, so you guys can have a look.
    Actually, I can see low molecular weight peaks (100-200 bp) in these images. Maybe they don't look large, but they might be causing some mischief...

    --
    Phillip



    • #32
      We used the GS FLX+ amplification protocol. We performed the titration exactly according to the manual, even using that odd 52 (or 56, I don't recall now) cpb point. We got about 12% enrichment on the LV.

      I do see smaller peaks on that trace as well, but I am not sure if they are real or some sort of artefact. Both of Roche's specialists agreed that the Agilent traces look fine, yet they said there are cases where "unseen" fragments appear.

      I also don't understand why they recommend such large fragments for the RL protocol. The cDNA protocol did not change, although those libraries already had peaks around 900-1200 bp; I remember asking Roche "why so large?" when that protocol came out, considering XLR70 reads were only half that size.

      I talked to our FAS over the phone just now and told him we need further investigation to know exactly what happened, plus suggestions on how to improve the prep. This run was a pilot for a much larger project and we need answers in order to go ahead. I am even considering preparing an XLR70 library from the same sample and running it.



      • #33
        So one issue with the RL procedure is that you can't tell by looking at the bioanalyzer trace what % of the molecules have an adapter on both ends. So, even though those lower MW peaks may look unimportant, they may represent a large fraction of the total number of two-adapter molecules.

        Because the RL adapters are labeled with FAM, you could use Roche's fluorometric method to determine the number of copies of the adapter in your sample. See how that compares to the number of fragments as determined by the bioanalyzer. With that information you can calculate what % of the molecules actually are templates for emPCR.
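
        For concreteness, here is a back-of-the-envelope sketch of that comparison (a minimal sketch in Python, assuming you have already converted the fluorometric reading into total adapter copies and the bioanalyzer reading into total fragment copies; the function name is just illustrative):

        Code:
        def template_fraction(adapter_copies, fragment_copies):
            """Estimate the fraction of fragments with an adapter on both ends."""
            # Each fragment has two ends; if the two ends ligate independently,
            # the two-adapter fraction is the per-end ligation fraction squared.
            p_end = adapter_copies / (2.0 * fragment_copies)
            return p_end ** 2

        # Example: 3e11 adapter copies across 5e11 fragments -> 30% of ends
        # are ligated, so only ~9% of molecules are templates for emPCR.
        print(f"{template_fraction(3e11, 5e11):.1%}")  # 9.0%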

        Similarly, you can use the results of your titration to back-calculate.

        --
        Phillip



        • #34
          To quantify our libraries we use only Roche's fluorimetric method, which works poorly, especially when we want to form pools. Our runs with MIDs are very heterogeneous. I asked our FAS about it as well, and he told me the right method to quantify RLs is qPCR.

          I don't know how to determine the number of molecules using bioanalyzer data. Also, I don't know how to back-calculate using titration data.



          • #35
            How to obtain peak molarity info from BioAnalyzer data.

            Okay, well, to cover the bioanalyzer question first:

            For the bioanalyzer there are 2 easy ways if you have the .xad file and the 2100 Expert software. By the way, I am referring to Version B.02.08.SI648 of the "2100 Expert" software. If you are using a different version, obviously things may be different.

            There are two methods, which I will call (1) the "Peak Table" method and (2) the "Region Table" method. Each has to be turned on before it will work, and each feature is activated differently.

            If the set points panel is not visible, turn it on by going to the "View" menu at the top of the window and choosing "Set points". Once you see the set points in their own panel on the right, look at the "Smear Analysis" section of the table; to see it, you will need to choose "Advanced" from the drop-down menu near the top of the panel. If "Perform Smear Analysis" is not checked ("x'ed"), you can use method 1 below. If it is checked, method 1 will not work when you need to manually add a peak, but you can use method 2.

            (1) Peak Table Method.
            Go to the sample you wish to quantitate by double-clicking it in the left panel. Not surprisingly, this method uses the "Peak Table" pane, chosen by clicking on the "Peak Table" tab or pressing "alt-p".

            Before diving in I should add that the software often appears to go insane and calls tens of minor noise bumps on a single peak each a separate peak. You can delete each of these manually, but that can be quite time consuming. So here is a trick that will get rid of most of them. On the right side there is the set points panel labeled "General Assay Setpoints". Above those words is a drop-down menu. (Change it from "Normal" to "Advanced" if you have not done so already.) Then scroll to the bottom parameter, "Peak Filter Polynom", and change it from "6" to something lower; you can try "1". You have to hit "enter" afterwards, or click on another parameter, for the change to take effect.

            This should clear many of the stray peaks caused by minor jaggedness of your trace, and may allow your main peak to actually register as a single peak. The DNA chip software will then show you the mass and concentration of that peak.

            If not, you can add the peak manually, but you will need to turn this feature on if you have not already. To do so, right-click in the trace pane and choose "Manual Integration". After doing this, right-clicking will offer some new choices on the context menu, specifically "Add Peak" and "Remove Peak". Point at the peak you want to quantify and choose "Add Peak", then adjust the borders of the peak by dragging them where you want. Alas, the default is for the blue dots that denote the edges of the peak to follow the trace vertically. There is a way to prevent them from doing so, but I don't know what it is. It is probably in the manual, though.

            [Note added later: Okay, I figured it out. First you must select the end point you want to unanchor from the trace line; click on it and it will turn from blue to green. Then, once the end point is green, hold down the control key while dragging the point with your mouse. The point becomes unanchored from the trace line and can be moved vertically. While click-dragging, if the end point crosses the trace line it will become anchored again; to drag it vertically across the trace line, keep holding control while dragging.]

            (2) The Region Table Method.
            Perhaps easier: again look at the right panel with all the parameters and set it to "Advanced". Look under the category "Smear Analysis" and make sure "Perform Smear Analysis" is x'ed; if not, click in the box to change that. Then look at the tabs in the middle panel and click the one called "Region Table". Then, in the trace window itself, right-click and choose "Add Region" from the context menu. Blue lines will appear on the trace denoting the borders of the region. Adjust them to positions you think reasonable (at the edges of your peak).

            In the table below the trace you will see characteristics of the region. The one you want is "Molarity". If you do not see that column in the table, right-click on the table and choose "Configure Table" from the context menu. Click on "Molarity" in the left sub-pane and then click the ">" button to move that column header to the right sub-pane.

            You can add regions to each sample. By the way, the molarity feature is only available for DNA chips, alas. Also, the upper lane standard peak is the one used to calibrate the amounts of the other peaks, so anything that throws off the bioanalyzer's estimation of the area of that peak will also prevent you from obtaining an accurate amount for your peak.

            Keep in mind that the BioAnalyzer does not distinguish between adapter+ and adapter- fragments, so you really are looking at all the input DNA, not just the constructs that can be amplified during emPCR. But your titration can give you that information. So can the fluorimetry, I think. But there is a factor that Roche does not take into account in that calculation; I'll go into that in another post, since this one has grown too long.
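
            (If you ever want to sanity-check the "Molarity" column, the arithmetic behind it is simple. A minimal sketch, assuming double-stranded DNA at roughly 660 g/mol per base pair:)

            Code:
            DS_DNA_G_PER_MOL_PER_BP = 660.0  # approximate mass of one dsDNA base pair

            def region_molarity_nM(conc_ng_per_ul, avg_size_bp):
                """Convert a region's concentration (ng/uL) and mean size (bp) to nmol/L."""
                return conc_ng_per_ul * 1e6 / (DS_DNA_G_PER_MOL_PER_BP * avg_size_bp)

            # Example: a 5 ng/uL region averaging 1,000 bp is ~7.6 nM.
            print(round(region_molarity_nM(5.0, 1000), 1))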

            --
            Phillip
            Last edited by pmiguel; 03-07-2012, 06:01 AM.



            • #36
              Thanks a lot, Phillip. You are the best. I will try that when I have a chance.

              Yesterday I asked Roche's support your question about fragment sizes for RL+. If I get an answer I will post it.

              I have to say our tech support people are being very quick to answer my questions. In the end I think the problem was our starting material, which was cDNA amplified by WTA. I am trying to find out if any technique was used to remove rRNA from the sample; if not, that might be the source of our problem. A highly redundant sample might have behaved like amplicons, which do not work well on the FLX+.
              Last edited by MissDNA; 03-07-2012, 05:47 AM.



              • #37
                Originally posted by pmiguel:
                Anyone have an idea as to why on earth the recommended library molecule sizes are so high for the FLX+? Read lengths max out below 1 kb -- what is the point of having molecules 2x that length? Seems like the PCR yields would just be lower.
                I asked our first-level support the same question: we had some bad XL+ runs (WGS and 16S amplicons) and they blamed the length of our libraries. They said that the new XL+ chemistry is more sensitive to fragment length than XLR70. I always thought that XL+ was identical to XLR70 except for the volume, in order to obtain more and longer reads. But the enzyme beads are smaller, so more beads fit into each PTP well. The support team said that smaller fragments cause a higher (stronger, I am not sure) signal than longer fragments; the smaller (and consequently more numerous) enzyme beads boost this effect, which ends up producing a higher amount of mixed and dot signals.

                Hm, this sounds plausible, but in our runs we see no significant correlation between fragment length and output quality. We have amplicons of 500-650 bp with really good runs (read length and Mb), and samples with consistently bad performance.

                By the way:
                support told us that there is no (!) sequencing facility in Germany without problems with the XL+ chemistry...
                Last edited by tokikake; 03-07-2012, 09:00 AM.



                • #38
                  Tokikake, did you get good XL+ 16S amplicon runs?

                  I asked why amplicon sequencing was not supported for FLX+ chemistry and the answer was:

                  "Compensation of a higher signal with same space-time or flow (phenomena normally attributed to amplicon sequencing) is not validated by Roche in the new chemistry XL+. For your information, this statement have to do merely with the following facts:

                  (a) Amplicon samples (due to redundancy of sequence) tend to emit higher signals per flow than shotgun samples.

                  (b) XL+ chemistry uses 1.5 uM enzyme beads, which is 3:1 Luciferase to Sulfurylase bound to beads while, XLR70 Titanium chemistry uses 3uM enzyme beads, a 2:1 Luciferase to Sulfurylase ratio.

                  (c) Beads for XL+ stack different. XL+ pre- and post- layer are more concentrated (more enzyme beads) than XLR70 Titanium respective layers.

                  Then, by running ‘amplicon type samples’ with XL+ chemistry, signal per base will be much greater than XLR70. To date processing algorithm cannot fully compensate such a high occurrence in signal causing run to potentially fail."



                  • #39
                    Originally posted by tokikake:
                    I asked our first-level support the same question: we had some bad XL+ runs (WGS and 16S amplicons) and they blamed the length of our libraries. They said that the new XL+ chemistry is more sensitive to fragment length than XLR70. I always thought that XL+ was identical to XLR70 except for the volume, in order to obtain more and longer reads. But the enzyme beads are smaller, so more beads fit into each PTP well. The support team said that smaller fragments cause a higher (stronger, I am not sure) signal than longer fragments; the smaller (and consequently more numerous) enzyme beads boost this effect, which ends up producing a higher amount of mixed and dot signals.
                    Okay, this almost makes sense. If you have some short (e.g., primer-dimer) templates in a well, then the extra enzyme packed into the wells for XL+ can more easily overshoot the desired signal strength for that well and bleed over into adjacent wells.

                    Of course it is a non-response to the question of why the fragment lengths Roche recommends for XL+ libraries are in the 1-2 kb range. I don't think 500 bp fragments are what cause signal bleed-over issues. So it seems like going for 800 to 1000 or 1200 bp would be desirable.

                    --
                    Phillip



                    • #40
                      About Fluorimetry of Rapid Libraries.

                      Hi Miss DNA,

                      Rapid libraries use fluor-labeled oligos in their adapters. We never bothered to use them for quantification, but I may rethink my take on this. The problem I saw with using the adapter fluor to estimate the number of library molecules is that if I had a pool of DNAs where most of the adapters were ligated to only one end of the insert molecules, then fluorimetry would be very inaccurate. That is, I would get fairly high fluorescence, but none of the fragments would actually PCR amplify, due to the lack of an adapter at one end.

                      But if I knew the % of adapter-ligated ends overall, I should be able to estimate the % of fragments with two adapters. That is, if my fluorimetry reading suggests that 10% of the fragment ends in my sample carry an adapter, then I would conclude that only 1% of the fragments are actually viable amplicons (0.1^2 = 0.01).

                      Sound reasonable?

                      --
                      Phillip



                      • #41
                        Originally posted by pmiguel:
                        Hi Miss DNA,

                        Rapid libraries use fluor-labeled oligos in their adapters. We never bothered to use them for quantification, but I may rethink my take on this. The problem I saw with using the adapter fluor to estimate the number of library molecules is that if I had a pool of DNAs where most of the adapters were ligated to only one end of the insert molecules, then fluorimetry would be very inaccurate. That is, I would get fairly high fluorescence, but none of the fragments would actually PCR amplify, due to the lack of an adapter at one end.

                        But if I knew the % of adapter-ligated ends overall, I should be able to estimate the % of fragments with two adapters. That is, if my fluorimetry reading suggests that 10% of the fragment ends in my sample carry an adapter, then I would conclude that only 1% of the fragments are actually viable amplicons (0.1^2 = 0.01).

                        Sound reasonable?

                        --
                        Phillip
                        How can you know the % of adapter-ligated fragments?

                        As I mentioned before, we always use the fluorimetric method, and our pools are never homogeneous. We make our 10e07 dilutions based on the data we get from Roche's calculator and usually use 100 ul of each diluted sample to form the pool. What we have been observing is that libraries with the nicest bioanalyzer traces tend to get a higher number of reads, sometimes with folds of difference. If you look at the trace I attached earlier, sample 2 is the strongest, sample 1 the weakest, and samples 3 and 4 very similar. When we did the MID/read separation we got the following (see the quick calculation after the counts):

                        Region 1:
                        sample 1: 20783 reads written into the SFF file.
                        sample 2: 73270 reads written into the SFF file.
                        sample 3: 41236 reads written into the SFF file.
                        sample 4: 48575 reads written into the SFF file.

                        Region 2:
                        sample 1: 23349 reads written into the SFF file.
                        sample 2: 78820 reads written into the SFF file.
                        sample 3: 45111 reads written into the SFF file.
                        sample 4: 53903 reads written into the SFF file.
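
                        To put a number on that spread, a quick sketch using the Region 1 counts above (a balanced 4-plex pool would give each sample ~25% of the reads):

                        Code:
                        region1 = {"sample 1": 20783, "sample 2": 73270,
                                   "sample 3": 41236, "sample 4": 48575}

                        total = sum(region1.values())
                        for name, reads in region1.items():
                            # prints 11.3%, 39.9%, 22.4%, 26.4%
                            print(f"{name}: {reads / total:.1%} of reads")
                        # strongest vs weakest library: ~3.5-fold difference
                        print(f"{max(region1.values()) / min(region1.values()):.1f}x")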



                        • #42
                          If your fragment lengths average 1000 bp, then there are roughly 1 billion fragments per ng, so you know how many total fragments you have. Your fluorimetry reading ostensibly gives you the number of amplifiable molecules. So you should be able to calculate from those the % of fragments that are amplifiable. From that, calculate the % of ends that are ligated to an adapter (the adapter count is 2x the number of molecules calculated by fluorimetry) and square it to get the fraction of fragments with two adapters.
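
                          Putting the pieces together, a minimal sketch of that back-calculation (assuming dsDNA at ~660 g/mol per bp and treating the fluorimetry figure as a molecule count, as described above; the numbers in the example are made up):

                          Code:
                          AVOGADRO = 6.022e23
                          DS_DNA_G_PER_MOL_PER_BP = 660.0

                          def fragments_per_ng(avg_len_bp):
                              """Number of dsDNA molecules in 1 ng at the given average length."""
                              return 1e-9 / (DS_DNA_G_PER_MOL_PER_BP * avg_len_bp) * AVOGADRO

                          def two_adapter_fraction(fluor_molecules, total_fragments):
                              adapters = 2.0 * fluor_molecules            # 2x, as described above
                              p_end = adapters / (2.0 * total_fragments)  # fraction of ligated ends
                              return p_end ** 2

                          total = fragments_per_ng(1000)  # ~9.1e8, i.e. roughly a billion per ng
                          # If fluorimetry reports molecules equal to 20% of the total fragments:
                          print(f"{two_adapter_fraction(0.2 * total, total):.1%}")  # 4.0%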

                          --
                          Phillip



                          • #43
                            This has been one of the most useful threads I’ve ever found! I’m pretty much on my own when it comes to feeding the 454, and while I have the help and support of my manager, most of the rest of the tasks involved with the 454 fall on my shoulders. You folks have been so much help, in so many ways. I really appreciate it!!
                            Pmiguel – thank you for all of the Agilent information! That will be really useful. Thank you also for the info that primers and small fragments can and will bind to the larger ones and migrate along with them in the Bioanalyzer; this explains so much of what I've dealt with over the past year. And as a side note – that reference you made to the drop-down menu on the "wells" tab in gsRunBrowser – I showed that to our interim FAS. She was not aware of it.
                            WPAFB – you have a 454? I didn’t know that. We should chat offline; my facility does a lot of Sanger sequencing for yours, and we’re pretty close to one another.
                            Jswebb2 – I think we had the same sales rep. We were told that we were 2nd on the upgrade list (I think you were first) for our region, and that it was scheduled to be installed in August of '11. That was pushed back multiple times. In January (I think) of this year, we were told that we were scheduled for an install in February or March. Since our machine has yet to be validated for the upgrade, and after getting all of this feedback from the forum users and from a facility that has had the upgrade for a while, we've decided to postpone the upgrade indefinitely. We've only had the instrument for 15 months or so and are still learning; I don't want to deal with any more headaches until I've conquered the ones I have!
                            MissDNA – love the name! Hope you get your FLX+ working well, and soon! And thanks for posting the info about the amplicons of the plus!

                            And to address the small fragments real quick – we were told that we had them, over and over again, until an FAS sat down and looked at the runs with us. What tech support was seeing were reads that were filtered out and included in the short quality category. If I remember correctly, the filters included in this are the Signal Intensity Filter, the Valley Filter, and the Q-score Trimming Filter. So a lot more than just real short products ends up in the short quality category, which misled us for a long time...



                            • #44
                              Thanks Anthony. I wanted to use my real name as my username but the forum wouldn't allow it, so I came up with MissDNA.

                              We are still troubleshooting our bad run. I can't completely blame the 454 for it. It turned out our input sample was the product of a total nucleic acid extraction, treated with or without DNase, subjected to RT-PCR and then WTA. So, all wrong. Most of our reads were human. We are now thinking about ways to better purify the sample. However, I do think the run could have been better if we had used XLR70 instead of XL+, due to the high redundancy in our sample. We might try an XLR70 run or move this project to Ion Torrent to reduce costs.

                              Yesterday we constructed 6 XL+ RLs. The Agilent trace looks clean of short fragments. We showed it to our tech support and he gave us the go-ahead to run. He did say, and I had already noticed, that one of the samples has more fragments longer than 2500 bp than desirable (the recommendation is less than 10%), but I think we will sequence it anyway. We should have another run soon. Hopefully it works well this time. I will keep you guys posted.

