12-05-2013, 03:24 AM   #1
johanneskoester
Junior Member
 
Location: Germany

Join Date: Sep 2011
Posts: 2
A new release of the Snakemake workflow system

Hi guys,
I would like to announce version 2.4.8 of Snakemake.
Snakemake is a pythonic, text-based workflow system with a clean, easy-to-read language for defining your workflows. It is inspired by GNU Make: workflows are defined by rules that generate output files from input files, and Snakemake automatically determines rule dependencies and parallelization.
In contrast to GNU Make, a Snakemake rule may have multiple output files. Further, rules can run shell commands, Python, or R code. Snakemake provides many additional useful features such as resource-aware scheduling, parameter and version tracking, and detection of incomplete files.
Finally, Snakemake has generic cluster support that works with any cluster or batch system providing a qsub-like command and a shared filesystem.

To give you an impression, this is what Snakemake rules look like:
Code:
# Pseudo-rule that lists the final targets of the workflow
rule targets:
    input:  'plots/dataset1.pdf',
            'plots/dataset2.pdf'

# Generic rule: create plots/<dataset>.pdf from raw/<dataset>.csv
rule plot:
    input:  'raw/{dataset}.csv'
    output: 'plots/{dataset}.pdf'
    shell:  'somecommand {input} {output}'
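As a further illustration, here is a minimal sketch of a rule with multiple output files and a declared thread count (the rule, file names, and aligner command are made up for this example):
Code:
# Hypothetical rule with two output files and a thread declaration
rule align:
    input:  'raw/{sample}.fastq'
    output: 'mapped/{sample}.bam',
            'logs/{sample}.log'
    threads: 8
    shell:  'somealigner -t {threads} {input} {output[0]} 2> {output[1]}'
The same Snakefile can then be run with, e.g., snakemake -j 8 on a single machine, or with snakemake --cluster qsub -j 32 on a batch system.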
If you like Snakemake, please feel free to visit http://bitbucket.org/johanneskoester/snakemake.
12-05-2013, 05:50 AM   #2
dariober
Senior Member
 
Location: Cambridge, UK

Join Date: May 2010
Posts: 311

Hi johanneskoester,

I'm curious to learn more about Snakemake, thanks for posting it!

I'm quite familiar with Python, R, and bash, but not at all with GNU Make and friends (other than running make when compiling some source code). So I must admit I fail to see where the advantage lies when building bioinformatics pipelines.

For example, following this Snakemake example, here is how I would implement the same pipeline:

Code:
REF="/global/home/users/ebolotin/scratch/hg19/hg19"
for fq in *.fastq.gz
do
    bname=`basename $fq .fastq.gz`
    ## Prepare pipeline
    echo "cutadapt  -m 10 -a AGATCGGAAGAGCACACGTCTGAACTCC -o ${bname}.cut $fq &&
    bowtie2 -p 20 --very-sensitive -x $REF -U ${bname}.cut -S ${bname}.sam &&
    makeTagDirectory ${bname}.tag ${bname}.sam -keepAll -genome hg19" > ${bname}.sh
    ## Run the job:
    bash ${bname}.sh
    # OR
    # nohup ${bname}.sh &
    # OR
    # bsub [opts] < ${bname}.sh
done
Could you point out in what respects Snakemake would be preferable?

Thanks!
Dario
12-05-2013, 07:18 AM   #3
johanneskoester
Junior Member
 
Location: Germany

Join Date: Sep 2011
Posts: 2

Hi,
sure. The solution you provide runs a for loop and writes a bash script per fastq file; with the nohup or bsub variants, the jobs will execute in parallel, all fine.
At first sight, you would get a similar effect with Snakemake. However, there are various advantages (some of them, though to the best of my knowledge not all, are also provided by other workflow systems; see the sketch of your pipeline after this list):

- With Snakemake, you can define how many processes should be active at the same time, so that your machine is not flooded with jobs. Snakemake schedules the jobs so that the utilization of the provided cores is maximized, and the scheduler is aware of the number of threads each job uses.
- If a job fails, or you have to abort the execution, Snakemake determines on the next invocation what was already computed last time and recomputes only the missing parts.
- If an input file changes, Snakemake proposes to rerun the subsequent part of the pipeline automatically; in other words, it detects when one of your files is outdated.
- You can run the very same workflow definition on a single machine or a cluster, without redefining anything in the Snakefile.
- Following the well-known input-output-code pattern, Snakemake rules are very easy to read and help to separate your commands from their parameters.
- For each output file created during the workflow, Snakemake stores metadata such as the parameters, commands, and input files used, which is nice for documentation.
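Here is that sketch: a rough, untested translation of your pipeline into a Snakefile (the sample discovery via glob and the flat file layout are my assumptions; the commands are copied from your script):
Code:
from glob import glob

REF = '/global/home/users/ebolotin/scratch/hg19/hg19'
# Derive sample names from the fastq files in the working directory
SAMPLES = [f[:-len('.fastq.gz')] for f in glob('*.fastq.gz')]

# Pseudo-rule that requests the final tag directory for every sample
rule all:
    input: ['{}.tag'.format(s) for s in SAMPLES]

# Trim adapters from the raw reads
rule cutadapt:
    input:  '{sample}.fastq.gz'
    output: '{sample}.cut'
    shell:  'cutadapt -m 10 -a AGATCGGAAGAGCACACGTCTGAACTCC -o {output} {input}'

# Map the trimmed reads against hg19
rule bowtie2:
    input:  '{sample}.cut'
    output: '{sample}.sam'
    threads: 20
    shell:  'bowtie2 -p {threads} --very-sensitive -x ' + REF + ' -U {input} -S {output}'

# Build the HOMER tag directory
rule tagdir:
    input:  '{sample}.sam'
    output: '{sample}.tag'
    shell:  'makeTagDirectory {output} {input} -keepAll -genome hg19'
You would run it with snakemake -j 20 on your machine, or with snakemake --cluster qsub -j 10 on a cluster, without changing the Snakefile.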

Best,
Johannes
11-02-2018, 02:29 AM   #4
Physalia-courses
Member
 
Location: Berlin

Join Date: May 2017
Posts: 13

Interested in learning more about #SNAKEMAKE?

Register now for the first 2-day #SNAKEMAKE Workshop in Berlin with Johannes Köster https://johanneskoester.bitbucket.io/

https://www.physalia-courses.org/cou...hops/course41/

You will learn how to create modern and reproducible #bioinformatics workflows.