  • Split fastq into smaller files

    Dear all,

I'm looking into splitting a FASTQ read file into several smaller files. It's basically just distributing batches of 4 lines (one record each) into a certain number of files.

    I'm trying with
    Code:
    split -l <number of lines per file> <FASTQ>
which works, of course, but is far too slow on a HiSeq read file.

    Any recommendations for faster splitting? awk, sed?

    Thanks!
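
    (For reference, a concrete invocation might look like the sketch below; the file name, output prefix and chunk size are just placeholders, and the line count has to be a multiple of 4 so no record gets split across files.)
    Code:
    # 4,000,000 lines = 1,000,000 complete 4-line FASTQ records per output file
    split -l 4000000 reads.fastq chunk_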

  • #2
    I haven't tried this, but would it be faster if you specified a file size rather than a line number?
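
    Something like the sketch below, presumably (assuming GNU split; note that byte-based splitting pays no attention to FASTQ record boundaries):
    Code:
    # split into ~1 GiB pieces by byte count (this can cut a 4-line record in half)
    split -b 1G reads.fastq chunk_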

    Comment


    • #3
I really don't see anything faster than split, unless you want to parallelize it and let each subprocess extract a certain part of the file (using e.g. awk).

      But for really large files, just counting the lines (needed as input for awk) would also take a lot of time...

      I would just split it as you do...

Code:
      [palle@s01n11 3_adapter_trimmed]$ time split -l 4000000 fastqfile

      real    2m17.853s
      user    0m2.640s
      sys     0m17.980s

For a 16 GB file, that is OK.

Code:
      cat fastq | grep -e "^@" | wc -l

      which gives 69332456 FASTQ records.

A primer for awk:

      Code:
      fq=...
      from=0
      to=4000000
      time cat $fq | awk "NR > $from && NR <= $to" > xaa
      cat xaa | grep -e "^@" | wc -l

You could do this in a simple for loop in bash and submit each cat|awk to separate nodes of a cluster (a sketch follows below)... but I doubt it's worth the hassle. Submit all your splits to a cluster and go grab a cup of coffee...

Edit:

      Code:
      time cat $fq | awk "{ if (NR <= $from) next; if (NR <= $to) print; else exit }" > xaa

      This will exit as soon as the wanted part has been extracted and is much faster for large files (well, only for the first splits; for the last parts it still has to read through most of the file first).
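
      For completeness, a rough sketch of that loop (untested; the chunk size, file names and part_ prefix are just placeholders, and on a real cluster you would replace the backgrounded awk with your site's job submission command):

      Code:
      fq=fastqfile
      chunk=4000000
      total=$(wc -l < "$fq")
      i=0
      for from in $(seq 0 $chunk $((total - 1)))
      do
          to=$((from + chunk))
          # each job scans the file and keeps only lines from+1 .. to
          awk -v from=$from -v to=$to 'NR > from { if (NR <= to) print; else exit }' "$fq" > part_$i.fastq &
          i=$((i + 1))
      done
      wait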
      Last edited by pallevillesen; 12-07-2012, 12:48 AM. Reason: Added better awk solution

      Comment


      • #4
        Originally posted by ehlin View Post
        I haven't tried this, but would it be faster if you specified a file size rather than a line number?
        Haven't tried it, but wouldn't this result in truncated FASTQ entries, especially if you are doing it on compressed files to save time?

        Originally posted by pallevillesen View Post
I really don't see anything faster than split, unless you want to parallelize it and let each subprocess extract a certain part of the file (using e.g. awk).
Thanks! Though... 2 minutes on a 16 GB file? I tried splitting a 32 GB file and it took HOURS! There must have been something seriously wrong with our file system server...

        Comment


        • #5
          Originally posted by lorendarith View Post

          Any recommendations for faster splitting? awk, sed?

          Thanks!
Well, it's just reading the file and writing it back out, so it should be very fast. Yes, you can use awk. How big do you want your small files to be?

You can do something like this (bash syntax):

Code:
           for i in `seq 1 10`
           do
             # reads.fastq below is a placeholder for your input file
             awk -v v=$i '{if (NR>(v-1)*400000 && NR<=v*400000) print}' reads.fastq > $i.fastq
           done
That will break a 1M-read FASTQ file into ten files of 100K reads each.

And it should be very quick, a few minutes even for very big files.
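
           If rereading the file ten times bothers you, a single-pass variant should also work (a rough sketch; the block size of 400,000 lines and the input file name are placeholders):

           Code:
           # one pass over the input: every 400,000-line block goes to its own numbered file
           awk '{ f = sprintf("%d.fastq", int((NR - 1) / 400000) + 1); print > f }' reads.fastq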

          PS sorry - you already got the question answered, I'm still asleep apparently
          Last edited by apredeus; 12-08-2012, 10:32 AM.

          Comment


          • #6
            Originally posted by apredeus View Post
            PS sorry - you already got the question answered, I'm still asleep apparently
ALL suggestions are welcome and appreciated! Thanks

            Comment


            • #7
You're welcome. I've just changed the code a bit; I had messed up a variable name within awk.

              Comment


              • #8
                Originally posted by lorendarith View Post
Haven't tried it, but wouldn't this result in truncated FASTQ entries, especially if you are doing it on compressed files to save time?

                 Thanks! Though... 2 minutes on a 16 GB file? I tried splitting a 32 GB file and it took HOURS! There must have been something seriously wrong with our file system server...
Well... our cluster is brand new, with an 80 Gbit network between the nodes and the file server; that may be why things run extremely fast here...

                 Anyway, your problem is solved.

                Comment


                • #9
                  Originally posted by lorendarith View Post
Haven't tried it, but wouldn't this result in truncated FASTQ entries, especially if you are doing it on compressed files to save time?

                   Thanks! Though... 2 minutes on a 16 GB file? I tried splitting a 32 GB file and it took HOURS! There must have been something seriously wrong with our file system server...
                  No local storage? NFS?

                  Comment


                  • #10
There should be a solution that just changes the directory entries, file names,
                    and file sizes, while keeping the data where it is.

                    Comment


                    • #11
                      Originally posted by gsgs View Post
There should be a solution that just changes the directory entries, file names,
                      and file sizes, while keeping the data where it is.
Sure, but reading a 32 GB file and maybe rewriting it (in chunks) is terribly slow via NFS...

                      Comment
