  • Large discrepancy between de novo assembly size and actual biological genome size

    Hello everyone,

    I'm in the midst of assembling a eukaryotic genome for the first time, working in a non-model plant species, and I could use some insight. My data consist of reads from a full lane of Illumina HiSeq V4 2x125 sequencing with an insert size of ~350bp. Before starting my assembly, I used flow cytometry to estimate nuclear genome 2C content, which returned 2C = 0.82pg DNA, or about 800Mb, for a haploid genome size of about 400Mb. However, k-mer-counting programs such as Jellyfish predict a genome size of less than half that number, at about 190Mb, and sure enough, when I conduct the assemblies the sum of scaffold lengths is always in the range of 170-215Mb.

    Does anyone have any idea why the nuclear genome size is so much larger than what I've been able to assemble? My first hypothesis is heavy repeat content, but I need a way to demonstrate that this hypothesis is supported by my reads, and I'm brand new to looking into repeats. I'm sure there is a sizeable set of repeats in my organism's genome, but is there a way to estimate the approximate density of repeats as a percentage of the total genome, given that I'm confident in my nuclear genome size? (There's a rough sketch of the k-mer-histogram approach I'm using at the end of this post.)

    Any related thoughts/comments would be appreciated!
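
    In case it's useful, here is roughly what I'm doing with the Jellyfish histogram to get a size estimate and a crude repeat fraction. This is only a sketch: it assumes the standard two-column coverage/count file written by jellyfish histo, and the filename is just a placeholder.

    Code:
    # Rough genome-size / repeat-fraction estimate from a k-mer histogram.
    # Assumes the two-column "coverage  count" format written by jellyfish histo.
    # The low-coverage error tail is trimmed before locating the main peak.

    def parse_histo(path):
        """Return (coverage, count) pairs from a jellyfish histo file."""
        hist = []
        with open(path) as fh:
            for line in fh:
                cov, count = line.split()
                hist.append((int(cov), int(count)))
        return hist

    def estimate(hist, error_cutoff=5):
        # Drop the error tail (k-mers seen only a handful of times).
        clean = [(c, n) for c, n in hist if c > error_cutoff]

        # Main peak = coverage bin holding the most k-mers; taken as the 1x depth.
        peak_cov = max(clean, key=lambda x: x[1])[0]

        total_kmers = sum(c * n for c, n in clean)

        # Haploid genome size estimate: total k-mers / single-copy depth.
        genome_size = total_kmers / peak_cov

        # Crude repeat fraction: k-mers well above the single-copy depth
        # (here more than twice the peak) are treated as repetitive.
        repeat_kmers = sum(c * n for c, n in clean if c > 2 * peak_cov)
        return genome_size, peak_cov, repeat_kmers / total_kmers

    size, depth, rep = estimate(parse_histo("reads_21mer.histo"))  # placeholder filename
    print(f"1x depth ~{depth}x, genome ~{size / 1e6:.0f} Mb, repetitive k-mers ~{rep:.0%}")

    The repetitive fraction here only counts k-mers above twice the single-copy depth, so it will understate repeat families whose copies have diverged; treat it as a sanity check rather than a substitute for a proper repeat annotation.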

  • #2
    Originally posted by NYGen:
    My guess would be that your flow cytometry result was wrong. It could be endo-reduplication or bad size standards throwing you off.

    Since a 200-300Mb genome is probably about 10X easier to assemble than an 800Mb genome, count your blessings.

    I hear you about repeats -- I would like to see a transposable element-aware assembler that tackled the repetitive fraction of the genome first.

    --
    Phillip



    • #3
      I do not know how the flow cytometry measurement works, but 800 = 4*200; are you sure your plant is not tetraploid?



      • #4
        @pmiguel - I doubt that the FCM analysis is off, as we did 3 replicates and they were consistently near the value above. I hear you, though, about the possibility of the standards being off, so I'm also having nuclear genome content estimated for two sister species that frequently hybridize with my species of interest. Do you think I should also send more samples of my species of interest? I suppose if it is a standard-based error, then I should definitely send them again; I was going to estimate the sister taxa anyway. Perhaps if the FCM results for the sister taxa diverge either from my species of interest or from each other, I'll plan to send more samples of the species whose genome I'm assembling.

        @Chipper - good catch. That's been on my mind for a while now. My species of interest is part of a clade in which each member has a diploid chromosome count of 2m, where m is the 2n chromosome number of every species in the outgroup, so my species is probably an ancient polyploid along with the rest of its clade. However, I'm unconvinced that I can treat this genome as coming from a polyploid, because a recently published congeneric genome estimates repeat content at >50%. So, if I assume that my HiSeq reads are unable to span the majority of repeat elements, do you think there's a basis for suspecting that I'm only assembling about half of the ultimate haploid genome size as a result of the repeat structures? (Quick arithmetic below.)

        Thanks for your thoughts!
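
        For what it's worth, this is the back-of-the-envelope arithmetic I keep coming back to, using only the numbers above (400Mb haploid from flow cytometry, ~190Mb from the k-mer estimates and assemblies):

        Code:
        # Back-of-the-envelope check using the numbers from this thread.
        flow_cytometry_size_mb = 400   # haploid size from 2C = 0.82 pg
        assembled_size_mb = 190        # k-mer estimate / sum of scaffold lengths

        # If the "missing" sequence is repeats that collapsed onto single copies
        # during assembly, the genome would need roughly this repetitive fraction:
        implied = (flow_cytometry_size_mb - assembled_size_mb) / flow_cytometry_size_mb
        print(f"implied collapsed/repetitive fraction: {implied:.1%}")  # 52.5%

        That lands right around the >50% repeat content reported for the congeneric genome, which is what keeps the repeat hypothesis alive for me despite the ploidy question.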



        • #5
          Dear NYGen,
          I have the same problem with my plant genome.
          Did you ever reach a conclusion?



          • #6
            Hey GAFA, I would look into estimating repeat content, which you can do with Repeat Explorer (at my last check there was a Galaxy server specifically for running this analysis quickly through a GUI). My conclusion for my original problem was that the discrepancy arose from a combination of: 1) ancient tetraploidy, and, more interestingly, 2) high repeat content that confounds the de Bruijn graph-based de novo assembly approach.

            First order of business is probably looking for similar analyses already done in related taxa, if you're lucky enough to have a popular study system with at least one established, post-draft genome. I've also sketched a quick depth-based sanity check below. Happy to help further, let me know.
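
            One quick sanity check before diving into Repeat Explorer: map your reads back onto your assembly and look at per-contig depth. Contigs sitting at roughly twice (or more) the modal/median depth are good candidates for collapsed repeats, and their excess depth gives a rough figure for how much sequence the assembly span is missing. A minimal sketch, assuming you have already summarized the mapping into a contig / length / mean-depth table (the filename and column layout are assumptions, not the output of any particular tool):

            Code:
            # Flag likely collapsed repeats from per-contig read depth.
            # Assumes a tab-separated table: contig <TAB> length_bp <TAB> mean_depth,
            # e.g. summarized from a BAM of the reads mapped back to the assembly.
            from statistics import median

            contigs = []
            with open("contig_depths.tsv") as fh:          # placeholder filename
                for line in fh:
                    name, length, depth = line.rstrip("\n").split("\t")
                    contigs.append((name, int(length), float(depth)))

            single_copy_depth = median(d for _, _, d in contigs)  # crude 1x depth

            collapsed = [(n, l, d) for n, l, d in contigs if d >= 2 * single_copy_depth]

            # A contig at c times the 1x depth stands in for roughly c copies, so
            # (c - 1) * length of sequence is absent from the assembly span.
            missing_bp = sum(l * (d / single_copy_depth - 1) for _, l, d in collapsed)

            print(f"1x depth ~{single_copy_depth:.1f}x")
            print(f"{len(collapsed)} of {len(contigs)} contigs look collapsed (>= 2x depth)")
            print(f"roughly {missing_bp / 1e6:.0f} Mb of collapsed repeat sequence implied")

            It is crude (heterozygous regions and organellar contigs will skew it), but it is a fast way to see whether collapsed repeats can plausibly account for a flow-cytometry-versus-assembly gap like the one above before committing to a full Repeat Explorer run.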
