SEQanswers

SEQanswers (http://seqanswers.com/forums/index.php)
-   Bioinformatics (http://seqanswers.com/forums/forumdisplay.php?f=18)
-   -   Removing duplicate fastq entries from concatenated files (http://seqanswers.com/forums/showthread.php?t=72051)

horvathdp 10-17-2016 03:22 PM

Removing duplicate fastq entries from concatenated files
 
I have concatenated two fastq files and I'm pretty certain I have quite a few duplicates. Is there a script, program (something in BBMap?), or common way to remove duplicates based on the sequence identifier (as opposed to a kmer- or sequence-based method, since I want to retain all unique fragments at this point)? Any assistance would be most appreciated.

GenoMax 10-17-2016 03:56 PM

Why would there be duplicates if the files came from two different lanes/flowcells?

horvathdp 10-18-2016 05:50 AM

They were not. They were two different selections from the same set of flow cells: one a selection of fragments from low-copy regions of the genome based on kmer counts, and the other a selection of genomic fragments that mapped to transcribed sequences. Thus I am expecting a fair number of common frags in both selections.

GenoMax 10-18-2016 06:15 AM

Quote:

Originally Posted by horvathdp (Post 200014)
Is there a script, program (something in BBMap?), or common way to remove duplicates based on the sequence identifier (as opposed to a kmer- or sequence-based method, since I want to retain all unique fragments at this point)?

You had originally asked about de-duplicating "based on sequence identifiers", but it sounds like you are just looking to de-duplicate the actual fastq reads.

dedupe.sh from BBMap is what you need. Depending on the size of your sequence file, be ready to allocate an adequate amount of RAM to the process.

horvathdp 10-18-2016 06:23 AM

If I ran dedupe, wouldn't that eliminate all duplicated kmers, not just duplicated fragments? I want to assemble the resulting file, and I worry that normalizing the kmer counts to no greater than 1 would not produce the best file for assembly. Am I wrong in this thinking? I toyed with the idea of just normalizing to 20 (which I intend to do at the end anyway), but figured that might leave cases where I still have more duplicate sequences than necessary.

GenoMax 10-18-2016 06:35 AM

Since dedupe can do the following

Quote:

Removes duplicate sequences, which may be specified to be exact matches, subsequences, or sequences within some percent identity.
You can specify that only exact matches over the full length be eliminated (I assume that is what you want)?

horvathdp 10-18-2016 07:32 AM

Possibly? If that works, then why don't people just use these essentially 1X files for assembly? I normally see 20X or 30X coverage for assemblies. That said, do you know of a way to just eliminate duplicate entries in a fastq file based on identifiers rather than sequence?

GenoMax 10-18-2016 07:48 AM

Quote:

Originally Posted by horvathdp (Post 200053)
Possibly? If that works, then why don't people just use these essentially 1X files for assembly? I normally see 20X or 30X coverage for assemblies. That said, do you know of a way to just eliminate duplicate entries in a fastq file based on identifiers rather than sequence?

I am not sure what you are referring to here.

If one were certain to have every part of the starting material covered (e.g., if we had a theoretical sequencer that started at one end of the chromosome and read through its entire length), then 1X sequencing would be enough. By using 30X you ensure that all sequenceable areas are sampled (and represented) in your data.

In theory there should be no duplicate entries as far as sequence identifiers go (if you are referring to fastq headers). You would need to cat the same file twice to create them.

horvathdp 10-18-2016 08:02 AM

Ahhh!!! I might have just found the answer to my own question:
./dedupe.sh in=concat1.merged out=deduped_concat.merged rmn=t
The rmn=t flag requires both sequence and identifier to be identical for a read to count as a duplicate.

source: https://github.com/BioInfoTools/BBMa...r/sh/dedupe.sh

GenoMax 10-18-2016 08:10 AM

I think I understand this finally ...

Original fastq dataset was sampled (two different ways) and you want to eliminate duplicates that may have been selected in both datasets leaving only one copy in the final combined file. And yes, the dedupe solution you discovered will work for that.

horvathdp 10-18-2016 08:17 AM

In answer to your question above: in some cases (when the two different selection protocols identified the same fragment) I essentially did cat the same reads twice. Hence my desire to remove the duplicates. Do you follow?

horvathdp 10-18-2016 08:36 AM

Yes! Thanks

horvathdp 10-21-2016 10:18 AM

No! Sadly, dedupe.sh uses too much memory on my 250 GB machine for me to run it on my file of more than 800 million frags.

Any other ideas that might just sort and remove the duplicates by sequence name?
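[A low-memory alternative, sketched here as a single streaming awk pass, assuming standard 4-line FASTQ records and that the read ID is the first whitespace-separated field of the header; the file and output names are placeholders:]

```shell
# Keep only the first record seen for each read ID.
# Headers fall on every 4th line (NR % 4 == 1); the ID is field 1.
# "keep" stays set for the 3 lines that follow a kept header.
awk 'NR % 4 == 1 { id = $1; keep = !(id in seen); seen[id] = 1 } keep' \
    concat.fastq > dedup.fastq
```

This still holds one hash entry per unique ID, but that is far smaller than holding full sequences in memory, and the file is read in a single pass.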

GenoMax 10-21-2016 10:23 AM

How many sequences do you expect are duplicated? You could identify them (sort | uniq -d) after just pulling the headers out (grep "^@YOUR_SEQ_ID"), and then remove one of the copies.
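[That pipeline can be sketched as follows, assuming standard 4-line FASTQ records; awk is used to pull headers rather than grep, since quality lines can also begin with "@". The file names are placeholders:]

```shell
# List read IDs that occur more than once in the combined file.
# NR % 4 == 1 selects header lines; $1 drops the pair/comment field,
# so "1:N:0:..." vs "2:N:0:..." suffixes don't mask duplicates.
awk 'NR % 4 == 1 { print $1 }' concat.fastq | sort | uniq -d > dup_ids.txt
```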

horvathdp 10-21-2016 10:29 AM

I am playing with a unix-based option right now. I made a short test list of names in a file test.txt

@HWI-D00653:49:H2FF5BCXX:2:1101:1631:2117 1:N:0:ATCACG
@HWI-D00653:49:H2FF5BCXX:2:1101:1631:2117 2:N:0:ATCACG trim=1
@HWI-D00653:49:H2FF5BCXX:2:1101:1804:2196 1:N:0:ATCACG
@HWI-D00653:49:H2FF5BCXX:2:1101:2187:2119 1:N:0:ATCACG

and am trying to see if I can regenerate a fastq file from it using the command
grep -A 3 -f test.txt concated.fastq > out.fastq
(the -f flag makes grep read its patterns from test.txt, rather than treating the file name itself as the pattern)

If it works, I can generate a unique list using grep|sort|uniq

My guess is this could take a while, though, as the test.txt run has been going for the last 5 minutes.
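[For the record-extraction step, a sketch assuming GNU grep (the --no-group-separator flag is GNU-specific) and the file names from the post above: -f reads patterns from a file, -F matches them as literal strings rather than regexes, and --no-group-separator suppresses the "--" lines that -A would otherwise insert between match groups, which would corrupt the output FASTQ:]

```shell
# Extract each 4-line record whose header appears in test.txt.
# -F: literal-string matching (headers contain ':' but no regex intent)
# -f test.txt: read the list of header patterns from a file
# -A 3: print the 3 lines after each matching header
grep -F -f test.txt -A 3 --no-group-separator concated.fastq > out.fastq
```

Note that grep re-scans the whole pattern list per line, so this gets slow for large pattern files; the single-pass awk approach scales better.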

davstern 07-05-2019 01:20 PM

seqkit rmdup does this in a flash

https://bioinf.shenwei.me/seqkit/usage/

