Hi,
Sure. The solution you provide runs a for loop and spawns a bash job for each fastq, so the jobs will execute in parallel; all fine.
At first sight, using Snakemake would give you a similar effect. However, it has various advantages (some of them, though to the best of my knowledge not all, are also provided by other workflow systems):

With Snakemake, you can define how many processes may be active at the same time, so that your machine is not flooded with jobs. Snakemake schedules them so that the utilization of the provided cores is maximized, and the scheduler is aware of the number of threads each job uses.
If a job fails, or you have to quit the execution, Snakemake will determine on the next invocation what was already computed last time and only compute what is still missing.
If an input file changes, Snakemake will propose to rerun the downstream part of the pipeline automatically (in other words, Snakemake automatically detects when one of your files is outdated).
You can run the very same workflow definition on a single machine or a cluster, without redefining anything in the Snakefile (see the command line example below).
Following the well-known input-output-code pattern, Snakemake rules are very easy to read and help to separate your commands from the parameters (see the Snakefile sketch below).
For each output file created during the workflow, Snakemake stores metadata such as the parameters, commands, and input files used, which is nice for documentation.
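
To illustrate the input-output-code pattern, here is a minimal sketch of a Snakefile. The sample names, the directory layout and the bwa/samtools command are just placeholders for whatever your pipeline actually does:

Code:
# placeholder sample names; in practice these could come from your fastq files
SAMPLES = ["sample1", "sample2"]

# pseudo-rule that collects the final outputs of the workflow
rule all:
    input:
        expand("mapped/{sample}.bam", sample=SAMPLES)

# one rule per step, following the input -> output -> code pattern
rule map_reads:
    input:
        "reads/{sample}.fastq"
    output:
        "mapped/{sample}.bam"
    threads: 4  # the scheduler takes these threads into account when allocating cores
    shell:
        "bwa mem -t {threads} reference.fa {input} "
        "| samtools view -Sb - > {output}"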

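On the command line, limiting the number of parallel jobs or switching to a cluster then looks roughly like this (the core count and the qsub call are just examples; adapt them to your machine and scheduler):

Code:
# run locally, using at most 8 cores at the same time
snakemake -j 8

# run the very same Snakefile on a cluster, submitting each job via qsub
snakemake --cluster "qsub" -j 100
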
Best,
Johannes