  • Target Enrichment In-Solution

    We are planning to do in-solution enrichment with Agilent's SureSelect kit for our samples. Does anybody have experience with the protocol? And is there any other way to concentrate the samples without using a SpeedVac?
    We also decided to pool our samples and sequence them in just one lane. What do you think: is it better to do the enrichment with pooled libraries, or to mix them after capture? As you can see, we really need help; it's our first time!

    Thanks!

  • #2
    While not recommended by Agilent, we have had some success with indexing and pooling 4 samples together before enrichment. The problem is getting the libraries perfectly balanced in terms of DNA added before and after enrichment. Measuring the DNA or the number of fragments is really tricky - we tried both NanoDrop and Bioanalyzer, but it's hard to say yet which one is better (it doesn't look like it made much difference).

    For example, on a 4-plex (6 Mb exonic target space) we get a 24% / 28% / 33% / 15% distribution of reads per sample after enrichment. So not a perfect 25% each, but whether that matters depends on your requirements. We do mutation (SNV) detection, and with this 4-plex we get 60x to 120x mean read depth per target bp. That is more than enough to detect SNV and indel mutations reliably (you need roughly 15x for homozygous and 20x-30x for heterozygous calls).

    On a duplex - we get nearly perfect 50/50 read distribution.
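    To put numbers on the balance question, here is a quick sketch of how a sample's lane share translates into mean target depth. The total lane reads and on-target fraction below are illustrative assumptions, not figures from this thread; only the read length, target size, and percentages come from the posts above.

```python
# Translate each sample's share of a pooled lane into mean target depth.
# lane_reads and on_target are assumed illustrative values.
lane_reads = 20_000_000      # assumed total reads in one lane
read_len = 76                # bp, single-end
on_target = 0.6              # assumed fraction of reads hitting the target
target_bp = 6_000_000        # 6 Mb target space, as in the post above

shares = {"sample1": 0.24, "sample2": 0.28, "sample3": 0.33, "sample4": 0.15}

for name, share in shares.items():
    depth = lane_reads * share * read_len * on_target / target_bp
    print(f"{name}: {share:.0%} of lane -> ~{depth:.0f}x mean depth")
```

    With these assumptions the under-represented 15% sample still lands above 20x, which shows why a modest imbalance can be tolerable for SNV calling.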

    We will also try qPCR to measure the libraries before mixing, but getting slots on the sequencer has been difficult and we don't have results from that yet.

    We have a paper written up on the method already - of course I imagine we'll get criticism now that Agilent's own indexing approach is available. Agilent now offers indexing for SureSelect, done *after* enrichment in combination with Illumina's multiplexing kit, but I don't think there is much advantage to that other than avoiding biases in enrichment (some samples may capture better than others). Keep in mind that with Agilent's approach you must buy a separate reaction kit for each sample, enrich them separately, and then mix them together - not to mention you must buy Illumina's multiplexing kit on top of the Agilent indexing kit. Also, I heard from an Agilent rep that you must buy 100 SureSelect reactions to make it cost effective.

    We, however, do it all in one SureSelect reaction kit.

    I think a 4-plex or 5-plex is probably reasonable with our method, but going higher will make it more difficult (not sure how well a 12-plex would work!).

    Also, since the barcodes/indexes cause problems with basecalling, you will want to run an unbarcoded sample in one lane alongside your multiplexed lanes to act as a control lane, just in case.



    • #3
      Thank you, this is very helpful information! We'd like to try it with 5 samples. Did you increase the sample concentration for enrichment, or did you do the capture with ~100 ng of each sample?
      Is your paper already available? I'm really interested in it!
      I think many scientists will try to index their samples in a more efficient way, so Agilent will have to offer better solutions for this.



      • #4
        Another option is independently hybridizing with scaled down reactions (1/5 volume). Depending on the targeted library size, the amount of capture library can also be scaled down. We have had success using targeted capture with this method. A particular advantage is that samples can be pooled after enrichment in order to saturate sequencing capacity.



        • #5
          Hello All,

          I have been reading all your posts for a while and find them very useful!
          Thank you for bringing these questions up. I have another one: are you using the SureSelect AB Barcode Adaptor Kit, or the barcodes and adaptors in the SOLiD (Solexa) kit?
          Thanks!



          • #6
            NGSfan,

            If you don't mind my asking, what read coverage rates are you seeing for the CCDS exons at your various sequencing depths? (e.g., what % of the CCDS exons are covered by 8 or more reads at a given mean sequencing depth?) I've been processing a few SureSelect capture samples and see 8x+ coverage rates of 60-75%, with a max of 80% even at very deep sequencing. The dropouts are consistent across all samples and widely dispersed, peppering most exons.

            Rather discouraging!
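            For what it's worth, a uniform-coverage (Poisson) model gives an upper bound on the fraction of bases at ≥8x for a given mean depth; the gap between that bound and the 60-80% observed is a rough measure of capture non-uniformity. This is a back-of-the-envelope sketch, not anything SureSelect-specific:

```python
import math

def frac_at_least(k: int, mean_depth: float) -> float:
    """Fraction of target bases with coverage >= k under a Poisson model.

    Assumes perfectly uniform capture, so real enrichment data
    (overdispersed, with systematic dropouts) will fall below this.
    """
    p_below = sum(math.exp(-mean_depth) * mean_depth**i / math.factorial(i)
                  for i in range(k))
    return 1.0 - p_below

for depth in (10, 20, 40):
    print(f"mean {depth}x -> at most {frac_at_least(8, depth):.1%} of bases at >=8x")
```

            Under this model, anything above ~20x mean depth should put nearly all bases at ≥8x; plateauing at 80% despite deep sequencing points to systematic dropouts rather than insufficient depth.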



            • #7
              Originally posted by Nix View Post
              NGSfan,

              If you don't mind my asking, what read coverage rates are you seeing for the CCDS exons at your various sequencing depths? (e.g., what % of the CCDS exons are covered by 8 or more reads at a given mean sequencing depth?) I've been processing a few SureSelect capture samples and see 8x+ coverage rates of 60-75%, with a max of 80% even at very deep sequencing. The dropouts are consistent across all samples and widely dispersed, peppering most exons.

              Rather discouraging!
              Hmmm, I'm not sure I have directly comparable statistics at the moment, but I can show you what I have so far. Your 8x coverage depth is a higher cutoff than the one I use; I will go back and calculate that. Perhaps I should do a cumulative distribution?

              We designed a SureSelect kit with a target region of about 3 Mb, covering 1,000 coding genes (~15,000 exons), with 3x bait tiling over the target regions wherever possible via eArray.

              We ligate barcodes to four fragmented samples and enrich them simultaneously with one SureSelect kit. We sequenced 76-bp single-end reads.

              Below are some stats in CSV format; here is a brief description of the columns:

              Sample = our sample ID
              TargetRegions = # of continuous target regions (merged baits)
              ReadsInTargetRegions = # of reads falling inside a target region (minimum 1bp overlap)
              PctBpCovered = % of target bases covered by at least 1 read
              PctRegionsNoReads = % of target regions not covered by any reads
              MedianDepthBp = median depth of basepair coverage
              MeanDepthBp = mean depth of basepair coverage


              Sample,TargetRegions,ReadsInTargetRegions,PctBpCovered,PctRegionsNoReads,MedianDepthBp,MeanDepthBp
              s_2_LIB043,14705,2257823,94.3,2.7,35.1,42
              s_2_LIB044,14705,2395733,95.8,2.1,35.4,44.2
              s_2_LIB045,14705,2672075,95.8,2,42,49.9
              s_2_LIB046,14705,3239879,96.2,1.9,48.9,60.2

              I will come back with some comparable statistics.
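              As a side note, the CSV above can be parsed directly to see how balanced this 4-plex came out; the code below only restates the numbers already in the table:

```python
import csv
import io

# Per-sample capture statistics quoted in the post above.
stats_csv = """\
Sample,TargetRegions,ReadsInTargetRegions,PctBpCovered,PctRegionsNoReads,MedianDepthBp,MeanDepthBp
s_2_LIB043,14705,2257823,94.3,2.7,35.1,42
s_2_LIB044,14705,2395733,95.8,2.1,35.4,44.2
s_2_LIB045,14705,2672075,95.8,2,42,49.9
s_2_LIB046,14705,3239879,96.2,1.9,48.9,60.2"""

rows = list(csv.DictReader(io.StringIO(stats_csv)))
total = sum(int(r["ReadsInTargetRegions"]) for r in rows)
for r in rows:
    share = int(r["ReadsInTargetRegions"]) / total
    print(f"{r['Sample']}: {share:.1%} of on-target reads, "
          f"mean depth {r['MeanDepthBp']}x")
```

              The shares work out to roughly 21-31% per library, similar to the 15-33% spread reported for the other 4-plex earlier in the thread.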



              • #8
                Originally posted by upenn_ngs View Post
                Another option is independently hybridizing with scaled down reactions (1/5 volume). Depending on the targeted library size, the amount of capture library can also be scaled down. We have had success using targeted capture with this method. A particular advantage is that samples can be pooled after enrichment in order to saturate sequencing capacity.

                This is an interesting idea! Is your design about 3Mb? And you are able to split a single sureselect reaction into 5 parts, if I understood correctly?



                • #9
                  Originally posted by NGSfan View Post
                  This is an interesting idea! Is your design about 3Mb? And you are able to split a single sureselect reaction into 5 parts, if I understood correctly?
                  Yes; but our target was 100kb. We captured samples at 1/5 volume, as well as at 1/5 volume with 1/10 bait; so the 1/5 volume with 1/10 bait used 1/50 of the original amount of capture bait per sample, and re-sequencing still yielded >500x unique reads per base.
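                  The arithmetic behind the 1/50 figure, spelled out (the bait amount per capture scales with reaction volume times bait dilution):

```python
# Bait amount per capture = reaction volume fraction * bait dilution factor.
configs = {
    "full reaction":          (1.0, 1.0),
    "1/5 volume":             (1 / 5, 1.0),
    "1/5 volume, 1/10 bait":  (1 / 5, 1 / 10),
}

for name, (volume, bait_dilution) in configs.items():
    fraction = volume * bait_dilution
    print(f"{name}: {fraction:.0%} of the full bait amount")
```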



                  • #10
                    Originally posted by upenn_ngs View Post
                    Yes; but our target was 100kb. We captured samples at 1/5 volume, as well as at 1/5 volume with 1/10 bait; so the 1/5 volume with 1/10 bait used 1/50 of the original amount of capture bait per sample, and re-sequencing still yielded >500x unique reads per base.
                    Did you tile your bait design?

                    In another thread I posted a question on the bait to sample ratio (http://seqanswers.com/forums/showthread.php?t=6091)

                    From your answer I get that this ratio is in fact not that critical?



                    • #11
                      Originally posted by JUdw View Post
                      Did you tile your bait design?

                      In another thread I posted a question on the bait to sample ratio (http://seqanswers.com/forums/showthread.php?t=6091)

                      From your answer I get that this ratio is in fact not that critical?
                      The Agilent multiplex protocol suggests that the gDNA:RNA bait ratio should be based on the target size. Keep in mind that the bait libraries are synthesized on a 55,000-feature array, amplified via a T7 promoter, and then transcribed into single-stranded RNA; for each targeted molecule printed on the array, there are many thousands of RNA oligos.
