I suppose many have this problem: how do you efficiently use the available CPUs, disks, and memory while running many (possibly memory-hungry) jobs at once? Is there a shell-based tool that can take a list of work and dispatch jobs (or parts of jobs) as resources become available? I'm sure many have cobbled together their own solution, but there must be a real, robust, open-source batch manager somewhere. It would be great to specify things like the number of CPUs to use, be "nice" when others are running jobs but hog resources when nobody else is, tag certain jobs to run only one instance at a time because they might use too much memory, split temp files across disks, etc. Unix/Linux would be fine, but operating-system-independent would be even better.
Any ideas?
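For the simplest part of this (bounded parallelism from a work list, niced to stay out of the way), here's a rough sketch of what I mean using plain `xargs -P`; the filename `jobs.txt` is just a placeholder, and this obviously doesn't handle memory tagging or disk spreading:

```shell
#!/bin/sh
# Minimal dispatcher sketch: jobs.txt (hypothetical) holds one shell
# command per line. Run up to 4 at a time, niced so interactive users
# aren't starved. xargs refills slots as jobs finish.
nice -n 19 xargs -P 4 -I CMD sh -c 'CMD' < jobs.txt
```

GNU parallel can apparently go further in this direction (it has options like `--memfree` and `--load` for resource-aware dispatch, if I'm reading the docs right), but I'm hoping for something closer to a full batch manager.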