SEQanswers > General


Old 08-24-2011, 03:23 PM   #1
Junior Member
Location: USA

Join Date: Dec 2010
Posts: 2
MarkDuplicate, coverage


I am using MarkDuplicates.jar from Picard tools to remove duplicate reads, then converting the BAM file to a pileup file to extract the coverage at each position. This gives me two BAM files: one from before running MarkDuplicates.jar and one from after. I generate a pileup file for each with samtools pileup, then read the per-position coverage from the pileup. The coverage values differ between the BAM before MarkDuplicates.jar and the BAM after. I want to use these coverage values for CNV analysis, so this is critical for me. Has anyone done this kind of work? Thank you.
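For reference, the pipeline I mean looks roughly like this (file names are placeholders, and the exact commands depend on your Picard and samtools versions):

```shell
# Mark/remove duplicates with Picard (classic KEY=VALUE syntax).
java -jar MarkDuplicates.jar INPUT=sample.bam OUTPUT=sample.dedup.bam \
    METRICS_FILE=dup_metrics.txt REMOVE_DUPLICATES=true

# Generate a pileup for each BAM (newer samtools releases use
# mpileup in place of the old pileup command).
samtools mpileup -f ref.fa sample.bam       > before.pileup
samtools mpileup -f ref.fa sample.dedup.bam > after.pileup

# Column 4 of the pileup is the read depth at that position;
# keep chromosome, position, and depth.
awk 'BEGIN{OFS="\t"} {print $1, $2, $4}' before.pileup > before.cov
awk 'BEGIN{OFS="\t"} {print $1, $2, $4}' after.pileup  > after.cov
```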
mikyi is offline   Reply With Quote
Old 08-25-2011, 04:45 AM   #2
Senior Member
Location: St. Louis

Join Date: Dec 2010
Posts: 535

Maybe I'm misunderstanding, but if you remove duplicates you would expect to see lower coverage. You should definitely use the coverage values from after removing duplicates (your second BAM file).
Heisman is offline   Reply With Quote
Old 11-20-2011, 11:32 PM   #3
Junior Member
Location: Sweden

Join Date: Feb 2009
Posts: 5

No question is really asked here, but I gather you want to know whether you should keep working with duplicates removed for your CNV analysis.

The answer to such a question depends on, among other things, your depth of sequencing, your samples (single individuals or pools), and your sequencing strategy (i.e. paired-end reads or single-end fragments).

A really high depth (for example 10,000x) will produce loads of duplicate reads even if no PCR duplicates are present, especially if you are sequencing single-end fragments, but also with paired-end reads if the mean depth is high enough and your insert-size distribution is not tight.

At extreme mean depths (for example from enrichment studies), this leads to all, or close to all, reads being flagged as duplicates. Since only one read per start position is then counted, the measured depth is capped, and ultimately you have no chance of distinguishing a CNV from normal sequence.
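A quick back-of-the-envelope sketch of this effect, assuming single-end reads whose start positions are Poisson-distributed along the genome (a simplification; Picard's actual duplicate marking also considers strand and, for pairs, the mate's position):

```python
import math

def expected_dup_fraction(mean_depth, read_len):
    """Fraction of single-end reads flagged as duplicates purely by
    chance, under a Poisson model of read-start positions.

    lam is the expected number of reads starting at any given base.
    One read per occupied start position survives; the rest are
    flagged as duplicates.
    """
    lam = mean_depth / read_len
    if lam == 0:
        return 0.0
    occupied = 1.0 - math.exp(-lam)  # P(at least one read starts here)
    return 1.0 - occupied / lam

# With 100 bp reads: roughly 14% flagged at 30x, 90% at 1,000x,
# and 99% at 10,000x, even with no PCR duplication at all.
for depth in (30, 1000, 10000):
    print(depth, round(expected_dup_fraction(depth, read_len=100), 3))
```

So at the depths typical of enrichment studies, the duplicate-flagged fraction is dominated by coincidental start positions rather than PCR artifacts.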

Thus, I would be careful about assuming duplicate removal is good for all kinds of analyses.
Calle is offline   Reply With Quote

Tags: coverage, markduplicate, pileup



