SEQanswers

Old 03-27-2009, 09:32 AM   #1
[c]oma
Junior Member
 
Location: Planet Earth

Join Date: Nov 2008
Posts: 5
Default Duplicate reads ("same start" reads) in 454 FLX/Titanium shotgun runs

Hi all,

I have been performing an in-depth quality analysis of some of our 454 whole-genome shotgun runs for a fungal species (~35-70 Mb genome) and a plant species (~1 Gb genome), from both FLX and Titanium runs. In both datasets, between 15 and 35% of the reads in each individual run are duplicate reads, i.e. they start at exactly the same nucleotide and their first 100 nt or more are exactly the same. Even though both genomes are repetitive (to some extent), this is far more than expected by chance alone. Our current hypothesis is that these duplicates are a result of the emulsion PCR step, but the percentage seems really high. Between runs from the same library there are not many shared duplicates, so it is not a library issue. Furthermore, we observe roughly the same numbers for paired-end libraries, which supports our hypothesis that this is an emPCR problem.
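
For what it's worth, the kind of duplicate count described above can be approximated with a simple prefix-grouping pass over the raw reads. This is only a rough sketch under the assumption that identical 100 nt prefixes imply identical start points; a real analysis would also check mapped position and strand, which plain sequence grouping ignores:

```python
from collections import defaultdict

def duplicate_fraction(reads, prefix_len=100):
    """Fraction of reads whose first prefix_len bases are identical
    to those of at least one other read (a proxy for 'same start')."""
    counts = defaultdict(int)
    for seq in reads:
        counts[seq[:prefix_len]] += 1
    # in each stack of n identical prefixes, n - 1 reads are duplicates
    dup_reads = sum(n - 1 for n in counts.values() if n > 1)
    return dup_reads / len(reads)
```

The function name and the prefix-only criterion are my own simplification, not from any 454 tool.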

Does anyone here have any experience with such analyses, and if so, do you find similar numbers?

Last edited by [c]oma; 03-27-2009 at 09:34 AM.
Old 03-30-2009, 04:59 AM   #2
joa_ds
Member
 
Location: belgium

Join Date: Dec 2008
Posts: 52
Default

I can only tell you that the emulsion PCR has a huge quantification bias.

When we were running amplicon sequencing, we normalized the amplicon DNA amount before the emulsion PCR, and afterwards there were sometimes 100-fold differences between the coverage of certain amplicons.

So yeah, it could be that certain sequences are preferentially amplified in the emPCR. Are those duplicated sequences the short ones?

How does your experiment work? Is there any amplification prior to the emulsion PCR?
Old 03-30-2009, 05:17 AM   #3
[c]oma
Junior Member
 
Location: Planet Earth

Join Date: Nov 2008
Posts: 5
Default

Some of the duplicated reads are indeed short sequences, but the majority of them fall inside the normal sequence-length distribution. There is no PCR step prior to the emPCR as far as I know, since it's a genomic shotgun library.

As for amplicon sequencing bias, a colleague of mine pointed me to this publication (granted, it's about Illumina data, but some of it might also apply to 454). Maybe it can help you understand your data better.

I realize now that I underestimated the contribution of repeats to this phenomenon, so I am currently looking into that. Nonetheless any insight into this is appreciated!
Old 03-30-2009, 05:58 AM   #4
kmcarr
Senior Member
 
Location: USA, Midwest

Join Date: May 2008
Posts: 1,177
Default

[c]oma,

We did have a researcher report discovering this in their data as well. I never had a chance to follow up on any other samples, so I can't say how common this problem is in our hands. Our immediate thought was duplicates generated during library amplification, since that is the most logical explanation. As in your case, though, there was evidence that it was not occurring at this step (different duplicates observed from multiple runs of the same library). The only explanation we could think of is that during the emulsification step some micelles ("micro-reactors" in 454 speak) were created that contained a single DNA molecule but multiple beads. This is only a hypothesis; we never did anything to test it. Maybe when I get some free time (yeah, right) I'll look at some of our other 454 runs to see how many duplicates exist.
Old 03-30-2009, 11:54 AM   #5
behoward
Member
 
Location: USA

Join Date: Mar 2009
Posts: 13
Default I think I have noticed the same thing...

I am looking at a public 454 GS20 dataset from the paper "Sampling the Arabidopsis Transcriptome with massively parallel pyrosequencing" (Weber et al., Plant Physiology, May 2007). This was actually an early 'RNA-Seq' experiment, not a genome sequencing project. I have never worked with the equipment, though, so I'm no expert here.

In any event, there seems to be an unexpected number of reads that are duplicates (multiple reads with exactly the same read start position and read length). Often you can see the exact same read two or three times, and in one extreme case (the extremely highly expressed Rubisco gene), there are about 4,000 duplicates of one short 72 bp read.

In some cases, I suppose, the duplicates could be a result of the end of a transcript, i.e. any fragment starting x bp before the end of transcription will have the same length. But a lot of these reads occur in the middle of known gene models. Maybe occasionally they are short non-coding RNAs. But there are so many that it seems like it must be a technical bias...
Old 03-30-2009, 12:49 PM   #6
kmcarr
Senior Member
 
Location: USA, Midwest

Join Date: May 2008
Posts: 1,177
Default

I am quite familiar with that data set! As you pointed out, this is an RNA sequencing project, so that is an added complication. The cDNA was generated using the Clontech SMART PCR protocol, which is supposed to generate full-length cDNAs but could introduce a bias. Also, if you look at some of the supplementary figures for the paper, you will see that there is a bias for reads starting at the 5' or 3' end of the predicted cDNA. This is to be expected: unlike genomic DNA, where fragmentation should produce random start points, cDNA will always have fixed end points to start sequencing from.

An important point to understand about the chemistry of the 454 sequencer is that if two reads start at exactly the same point, and there are no missed or extra incorporations, then they will end at exactly the same point. The 454 runs a fixed number of sequencing cycles, so the read length is going to be fixed for a given sequence. The GS20 (which was used for this study) ran 42 cycles with the flow order TACG. If the bases in the sample are randomly distributed, you should see on average 2.5 bases incorporated per cycle, for an average read length of 105 nt. If the bases happen to be a repetitive stretch in the exact same order as the flow cycle, you would get 4 bases incorporated per cycle, for a read length of 168 nt. If the base order of the read were adverse to the flow order, you would end with a length shorter than the expected 105 nt. This may explain the 4,000 72 nt RuBisCO reads. This library was prepared from leaf tissue and was not normalized, so there was a huge amount of RuBisCO mRNA present. In fact, only 10 transcripts (RuBisCO and chlorophyll subunits) accounted for >50% of all reads.
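
kmcarr's flow arithmetic is easy to sanity-check with a toy simulation. The sketch below assumes ideal chemistry (every flow incorporates all matching consecutive bases, with no over- or under-calls); the TACG flow order and 42 cycles are taken from the post above:

```python
def flow_read_length(template, flow_order="TACG", cycles=42):
    """Simulate 454-style flow sequencing: each flow incorporates every
    consecutive template base matching the flowed nucleotide, and the
    run stops after a fixed number of flow cycles."""
    pos = 0
    for _ in range(cycles):
        for base in flow_order:
            while pos < len(template) and template[pos] == base:
                pos += 1
    return pos

# A template repeating the flow order incorporates 4 bases per cycle:
print(flow_read_length("TACG" * 100))  # 42 cycles * 4 = 168
```

This is why two reads from the same start point end at the same point: under this model the read length is a deterministic function of the template and the flow order.
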
Old 03-30-2009, 01:13 PM   #7
behoward
Member
 
Location: USA

Join Date: Mar 2009
Posts: 13
Default thanks!

Hi kmcarr,

Thanks for the response. One thing I don't quite understand is when you say "if two reads start at exactly the same point and there are no missed or extra incorporations then they will end at exactly the same point." For this (RNA-Seq) dataset, my understanding is that they did a nebulization step to randomly shear the cDNA. Is it possible that there could be fragments of differing lengths at the same start position for this reason? Or should all the fragments be much larger than 100 or so nucleotides? I didn't see any explicit mention of a size-selection step.

Also, I have been looking at the read-length distribution, and there is one peak at around 70 nt and another at around 100 nt. I normalized by transcript so that I only count a random 3 reads per transcript; this way genes like Rubisco won't have an unfair contribution. What do you think would cause this bimodal distribution? It's important that I understand this, because we are trying to model these distributions in an analysis method we are working on.

Brian
Old 03-30-2009, 02:09 PM   #8
kmcarr
Senior Member
 
Location: USA, Midwest

Join Date: May 2008
Posts: 1,177
Default

Yes, the cDNA is nebulized, and the average fragment size should be 500-800 bp. (There is an issue that nebulization won't break dsDNA shorter than ~700 bp, so that is another complication when dealing with cDNA; the shearing is not as "random" as it is with genomic DNA.) There is a size selection to remove fragments <300 bp, so the vast majority of the library sequences should be much longer than the expected ~100 nt (for the GS20) read length.

If I have 10 fragments all originating from the 5' end of 10 copies of the same cDNA but all of varying lengths between 500-800 nt, and the 454 adapters are ligated in the same orientation, then I should get exactly the same sequence from all of them. The sequence will start at the 5' end and will stop when the machine has completed its 42 cycles, regardless of how long the inserted fragments are. This is what I mean by "if two reads start.....".

I would have to look at the data to be sure, but I think the bimodal distribution is an artifact of the cDNA preparation method and the read-trimming process. Reads originating from either the 5' or 3' ends of cDNAs would include the SMART kit adapters, which are then clipped off by our trimming pipeline. The most commonly trimmed size from the 5' end is ~30 nt. Reads not including these adapters would be closer to their full read length (~100 nt) after trimming.
Old 03-30-2009, 03:32 PM   #9
behoward
Member
 
Location: USA

Join Date: Mar 2009
Posts: 13
Default

Thanks, that's very helpful information. I will go back and check whether the pile of short reads correlates with the ends of known gene models...
Old 04-01-2009, 01:09 AM   #10
jpp
Junior Member
 
Location: the earth

Join Date: Mar 2009
Posts: 3
Default

[c]oma,
We have found the same problem with our runs, with similar numbers. We think there is an inherent problem related to the emPCR. We asked the provider for technical assistance, and after many inquiries they told us the normal range is around 18%. By the way, we have not received any advice on how to reduce it.
Old 04-01-2009, 06:21 AM   #11
[c]oma
Junior Member
 
Location: Planet Earth

Join Date: Nov 2008
Posts: 5
Default

Thank you all for your replies. It is reassuring to see others are seeing the same things we are, so it doesn't seem to be something we are doing wrong. But I still don't really like it...
Old 04-01-2009, 11:11 AM   #12
jnfass
Member
 
Location: Davis, CA

Join Date: Aug 2008
Posts: 88
Default

I'll chime in to say that I've heard (through a colleague, who heard it from someone else, etc.) that this is indeed an artifact of the emulsion PCR: either (as in kmcarr's explanation) droplets contained multiple beads but only one piece of DNA, or DNA escaped from droplets during the PCR and colonized empty beads. Either way, you get the same read start, same stop, and the same base calls.

Note that newbler (gsAssembler) and gsMapper account for this by default. I don't know whether they collapse identical reads and then treat them as one read, or collapse them to one but boost the base qualities because of the technical replication, but in any case the code is "aware" of this issue. That doesn't help if you're not exclusively using the 454 pipeline, though. I've used CD-HIT to cluster near-100%-identical reads with 1 or 2 overhanging bases...
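
Outside the 454 pipeline, the simplest stand-in for that kind of collapsing is to bucket reads by a fixed-length prefix and keep one representative per bucket. This is a much cruder sketch than what CD-HIT actually does (it tolerates no mismatches in the prefix, so any sequencing error splits a stack):

```python
from collections import defaultdict

def collapse_by_prefix(reads, prefix_len=50):
    """Collapse reads sharing an identical prefix_len-base prefix,
    keeping the longest read of each stack as its representative.
    Returns (representatives, number_of_reads_removed)."""
    buckets = defaultdict(list)
    for seq in reads:
        buckets[seq[:prefix_len]].append(seq)
    reps = [max(stack, key=len) for stack in buckets.values()]
    return reps, len(reads) - len(reps)
```

Keeping the longest read per stack is an arbitrary choice for this sketch; a real tool would also merge quality information.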
Old 04-16-2009, 03:53 PM   #13
anar
Junior Member
 
Location: New Zealand

Join Date: Aug 2008
Posts: 6
Default

Quote:
Originally Posted by jnfass View Post
Note that newbler (gsAssembler) and gsMapper account for this by default; I don't know if they collapse identical reads and then treat them as one read, or if they collapse them to one, but add to the base qualities because of the technical replication, but in any case the code is "aware" of this issue.
Can anyone provide insight on how exactly newbler and gsMapper "account for" stacked reads? Or know where this is documented? This could be crucial for certain sequencing designs/applications...
Old 08-19-2009, 03:19 PM   #14
greigite
Senior Member
 
Location: Cambridge, MA

Join Date: Mar 2009
Posts: 141
Default stacking by chance alone?

Thought I'd bring this topic back up to see if anyone can offer additional advice. We are seeing this stacking effect in our shotgun library (reads with the same start). However, we have a dominant organism (~75% of the sample), which leads to extremely high read depth in some regions (>700x). Couldn't we get reads starting at the same position by chance alone at such depth? Naively, consider a 500 bp region with 1000x coverage. Say one new read starts every 5 bp in the region, meaning there are 100 total read starts; 1000x coverage / 100 read starts = 10x coverage per read start by chance alone. How can this be differentiated from the duplicate-read effect generated by emPCR? By read length, or by identity over the whole read? Could we implement a rule along the lines of: if reads start at the same position but differ in length by more than 1% (incorporation errors could change the length of true duplicates), then they are not duplicates?
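
To put a number on the "by chance alone" scenario: if read starts were uniformly random over the region, the fraction of reads sharing their start with at least one other read follows from a Poisson approximation. The sketch below is a back-of-the-envelope model, not a substitute for a proper duplicate-marking tool, and the 250 nt Titanium read length is just an illustrative figure:

```python
import math

def chance_duplicate_fraction(coverage, read_len, strands=2):
    """Expected fraction of reads that share their start position with
    at least one other read, assuming uniformly random starts.
    lam is the mean number of other reads starting at a given
    (position, strand) site, so P(>= 1 other) = 1 - exp(-lam)."""
    lam = coverage / (read_len * strands)
    return 1 - math.exp(-lam)

# e.g. 700x depth with ~250 nt reads:
print(round(chance_duplicate_fraction(700, 250), 2))  # 0.75
```

So at >700x depth a large same-start fraction is expected even without emPCR duplication, which argues for comparing observed stack sizes against this null model rather than just counting stacks.
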

There's also an interesting twist in some cases. In one instance, a bunch of reads start at the same location with a homopolymer run (say TTTT). Some reads have 3 T's, some have 2, some have 4. Should we interpret this as sequencing error alone?
Old 08-19-2009, 09:17 PM   #15
cgb
Member
 
Location: Cambridge

Join Date: May 2008
Posts: 50
Default

Might be your library prep. See the Turner et al. paper.
Old 08-20-2009, 09:24 AM   #16
greigite
Senior Member
 
Location: Cambridge, MA

Join Date: Mar 2009
Posts: 141
Default

Could you give a citation? I'm not familiar with the Turner et al paper you mention.
Old 08-20-2009, 10:19 AM   #17
kmcarr
Senior Member
 
Location: USA, Midwest

Join Date: May 2008
Posts: 1,177
Default

Quote:
Originally Posted by greigite View Post
Could we implement a rule along the lines of that if reads start in the same position, but are different in length by more than 1% (could get incorporation errors changing the length of duplicate reads) then they are not duplicates?
This rule would only work if you were sequencing the entire fragment; that is to say, the sequence reaches the 3' adapter, so you can determine the exact size of the original fragment. With a properly constructed library this should not happen: the average fragment size should be sufficiently large (500-800 bp) that the 200 cycles of sequencing on the FLX Titanium never reach the 3' adapter. Therefore you have no way of knowing, for a given read, what its underlying fragment size is.
Old 08-20-2009, 04:41 PM   #18
greigite
Senior Member
 
Location: Cambridge, MA

Join Date: Mar 2009
Posts: 141
Default

Quote:
Originally Posted by kmcarr View Post
This rule would only work if you are sequencing the entire fragment; that is to say the sequence is reaching the 3' adapter and thus you can determine the exact size of the original fragment. This should not be the case. The average fragmentation size should be sufficiently large (500 - 800 bp) such that the 200 cycles of sequencing on the FLX Titanium never reach the 3' adapter. Therefore you would have no way of knowing for a given read what its underlying fragment size is.
I think I'm missing something here. If I understood your earlier post correctly (quoted below), shotgun fragments that are randomly sheared at exactly the same location on one end should also have the same sequence length, regardless of the size of the original fragment. Why would it be necessary to know the underlying fragment size?

Quote:
Originally Posted by kmcarr View Post
<snip>
If I have 10 fragments all originating from the 5' end of 10 copies of the same cDNA but all of varying lengths between 500-800 nt, and the 454 adapters are ligated in the same orientation, then I should get exactly the same sequence from all of them. The sequence will start at the 5' end and will stop when the machine has completed its 42 cycles, regardless of how long the inserted fragments are. This is what I mean by "if two reads start.....".
Old 08-20-2009, 07:17 PM   #19
kmcarr
Senior Member
 
Location: USA, Midwest

Join Date: May 2008
Posts: 1,177
Default

Yes, you've got it. Your question (at least as I understood it) was whether or not you could differentiate, by the length of the read obtained, between independent fragments which happen to share the same 5' end and PCR duplicates. No, you can't. Using 454 cycle sequencing, if two reads start at the same position they will end up being exactly the same length, whether they were originally independent fragments or PCR-generated duplicates. This relies on the assumption of 454 library preparation that the fragment size of your library is longer than the expected read length. That is the only point I was making about original fragment sizes.
Old 08-20-2009, 07:39 PM   #20
greigite
Senior Member
 
Location: Cambridge, MA

Join Date: Mar 2009
Posts: 141
Default

OK, I understand now. Very good point, and thanks for elaborating. Is there another way you would suggest to differentiate between fragments sharing the same 5' end and PCR duplicates? Perhaps there is no way to do it when the shotgun fragments are drawn from a large population of an organism with very little variation, such that even two fragments which happen to share the same 5' end are 100% identical. I guess one has to assume the relative probabilities of PCR duplication versus shearing at the same point, and this probably scales with coverage (e.g. in lower-coverage regions, if two reads start at the same point they are more likely to be PCR duplicates).

Quote:
Originally Posted by kmcarr View Post
Yes, you've got it. Your question (at least as I understood it) was whether or not you could differentiate between independent fragments which happen to share the same 5' end and PCR duplicates by the length of the read obtained. No, you can't. Using 454 cycle sequencing, if two reads start at the same position they will end up being exactly the same length as each other, whether they were originally independent fragments or PCR generated duplicates. This statement relies on an assumption of 454 library preparation that the fragment size of your library is longer than the expected read length. That is the only point I was making about original fragment sizes.