SEQanswers

09-14-2010, 07:13 AM   #1
Uwe Appelt (Member; Heidelberg, Germany; joined Oct 2009; 27 posts)

Extract unaligned reads (Tophat) from FastQ

I would like to further examine the reads that Tophat didn't manage to align in a first run, and I wonder if there is an easy way to get at them. With Bowtie this would be easy using the "--un" argument, but Tophat doesn't seem to have anything like that. So far I am able to extract the read IDs of the reads that do align with:

Code:
cut --fields=1 accepted_hits.sam | sort --unique > accepted_hits_readsIds.txt
From there I'd need to extract the FastQ entries whose IDs don't match any line in that read-ID file. Since FastQ entries span four lines rather than one, I got stuck here; any ideas or help would be appreciated!

Thanks in advance & Cheers
Uwe

Last edited by Uwe Appelt; 09-15-2010 at 03:10 AM.
09-14-2010, 11:13 PM   #2
KevinLam (Senior Member; SEA; joined Nov 2009; 197 posts)

You can try G-SQZ:
http://www.ncbi.nlm.nih.gov/pubmed/20605925

Tembe W, Lowey J, Suh E. G-SQZ: Compact Encoding of Genomic Sequence and Quality Data. Bioinformatics. 2010 Jul 6. [Epub ahead of print]. Translational Genomics Research Institute, 445 N 5th Street, Phoenix, AZ 85004, USA.

Abstract: Large volumes of data generated by high-throughput sequencing instruments present non-trivial challenges in data storage, content access, and transfer. We present G-SQZ, a Huffman coding-based sequencing-reads specific representation scheme that compresses data without altering the relative order. G-SQZ has achieved from 65% to 81% compression on benchmark datasets, and it allows selective access without scanning and decoding from start. This paper focuses on describing the underlying encoding scheme and its software implementation, and a more theoretical problem of optimal compression is out of scope. The immediate practical benefits include reduced infrastructure and informatics costs in managing and analyzing large sequencing data.

Availability: http://public.tgen.org/sqz. Academic/non-profit: source available at no cost under a non-open-source license by requesting from the web-site; binary available for direct download at no cost. For-profit: submit a request for a for-profit license from the web-site. Contact: Waibhav Tembe ([email protected]).

Or maybe use BioPerl's Bio::Index::Fastq, if the number of reads is not too large:
http://www.bioperl.org/wiki/Module:Bio::Index::Fastq
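
A rough, untested sketch of how the indexing route might look (reads_1.fq, unmapped_readIds.txt and unmapped_reads.fq are just placeholder names, and you would still need to work out the list of unmapped IDs first, e.g. as the complement of the accepted_hits IDs):

Code:
#!/usr/bin/perl
use warnings;
use strict;
use Bio::Index::Fastq;
use Bio::SeqIO;

# Build an on-disk index of the FastQ file so individual records
# can be fetched by ID without rescanning the whole file.
my $inx = Bio::Index::Fastq->new(-filename   => 'reads_1.fq.idx',
                                 -write_flag => 1);
$inx->make_index('reads_1.fq');

# One read ID per line, i.e. the complement of the accepted_hits IDs
open (my $ids_fh, '<', 'unmapped_readIds.txt') or die "Can't open ID list: $!";

# Write the fetched records back out in FastQ format
my $out = Bio::SeqIO->new(-file => '>unmapped_reads.fq', -format => 'fastq');

while (my $id = <$ids_fh>) {
  chomp $id;
  my $rec = $inx->fetch($id) or next;   # Bio::Seq::Quality object; skip missing IDs
  $out->write_seq($rec);
}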
09-15-2010, 12:03 AM   #3
simonandrews (Simon Andrews; Babraham Inst, Cambridge, UK; joined May 2009; 869 posts)

You probably need to do this in two passes. It's also a pain that Tophat seems to alter the sequence IDs (though maybe that's because I was using paired-end data?), so you have to adjust the IDs a bit.

The code below seems to work on the Tophat files I just ran it against.

Code:
#!/usr/bin/perl
use warnings;
use strict;

my ($fastq,$sam,$outfile) = @ARGV;

unless ($outfile) {
  die "Usage is filter_unmapped_reads.pl [FastQ file] [SAM File] [File for unmapped reads]\n";
}

if (-e $outfile) {
  die "Won't overwrite an existing file, delete it first!";
}

open (FASTQ,$fastq) or die "Can't open fastq file: $!";
open (SAM,$sam) or die "Can't open SAM file: $!";
open (OUT,'>',$outfile) or die "Can't write to $outfile: $!";

my $ids = read_ids();

filter_fastq($ids);

close OUT or die "Can't write to $outfile: $!";


sub filter_fastq {

  warn "Filtering FastQ file\n";

  my ($ids) = @_;

  while (<FASTQ>) {

    if (/^@(\S+)/) {
      my $id = $1;

      # Remove the end designator from paired end reads
      $id =~ s/\/\d+$//;

      my $seq = <FASTQ>;
      my $id2 = <FASTQ>;
      my $qual = <FASTQ>;


      unless (exists $ids->{$id}) {
	print OUT $_,$seq,$id2,$qual;
      }
    }
    else {
      warn "Line '$_' should have been an id line, but wasn't\n";
    }

  }

}


sub read_ids {

  warn "Collecting mapped ids\n";

  my $ids;

  while (<SAM>) {

    next if (/^@/);
    my ($id) = split(/\t/);
    $ids->{$id} = 1;
  }

  return $ids;
}
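For what it's worth, usage would be something like: perl filter_unmapped_reads.pl reads_1.fq tophat_out/accepted_hits.sam unmapped_1.fq (those file names are just examples); the reads that never show up in the SAM file end up in the last file.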
09-15-2010, 04:07 AM   #4
Uwe Appelt (Member; Heidelberg, Germany; joined Oct 2009; 27 posts)

Hi Simon,

Thank you so much for that chunk of code, it works like a charm! I worked out a solution of my own as well, but besides its obvious drawbacks (e.g. poor parsing), fgrep appears to consume 40 GB of RAM just to filter against the ~18e6 read IDs in accepted_readIds.txt:

Code:
cut --fields=1 ./tophat_out/accepted_hits.sam | sort --unique > ./tophat_out/accepted_readIds.txt
# pack each 4-line FastQ record onto one line, drop records matching an accepted ID, unpack again
paste - - - - < ./reads_1.fq | fgrep --invert-match --file ./tophat_out/accepted_readIds.txt | tr "\t" "\n" > ./readsFilt_1.fq
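Most of that RAM presumably goes into fgrep holding the ~18 million patterns at once. Replacing the fgrep step with a small inline Perl filter that keeps the IDs in a hash (essentially what Simon's script does) should stay within a few GB; an untested sketch, assuming standard four-line FastQ records and the same file names:

Code:
paste - - - - < ./reads_1.fq \
  | perl -ane 'BEGIN { open(my $fh, "<", "./tophat_out/accepted_readIds.txt") or die $!;
                       while (<$fh>) { chomp; $ids{$_} = 1 } }   # load accepted IDs once
               (my $id = $F[0]) =~ s/^@//;                       # FastQ header -> read ID
               $id =~ s{/\d+$}{};                                # drop the /1 or /2 mate suffix
               print unless $ids{$id};' \
  | tr "\t" "\n" > ./readsFilt_1.fq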
So thanks again and Cheers,
Uwe
07-06-2011, 01:35 PM   #5
chadn737 (Senior Member; US; joined Jan 2009; 392 posts)

Simon, that script works wonderfully. Thanks.
08-07-2012, 04:33 AM   #6
fjrossello (Member; Melbourne (Victoria), Australia; joined Sep 2011; 30 posts)

Excellent Simon. Thanks!

Tags
tophat, unaligned, unmappable
