Dask on HPC
Sep 26, 2019

Recently I saw that Dask, a Python library for parallel and distributed computing, has some really handy wrappers (Dask JobQueue) for running Dask projects on a High-Performance Computing (HPC) cluster.
Most people who use HPC are pretty well versed in technologies like MPI, and just generally abusing multiple compute nodes all at once, but I think technologies like Dask are really going to be game-changers in the way we all work. Because really, who wants to write MPI code or vectorize?
If you've never heard of Dask and its awesomeness before, I think the easiest way to get started is to look at their Embarrassingly Parallel example, and don't listen to the haters who think speeding up for loops is lame. It's a superpower!
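To give you a taste, here's a rough sketch of that for-loop superpower using dask.delayed; the costly_simulation function is just a made-up stand-in for whatever slow, independent work your own loop does:

```python
import dask


@dask.delayed
def costly_simulation(parameters):
    # stand-in for whatever slow, independent work each loop iteration does
    return sum(parameters)


# each call just builds a lazy task; nothing actually runs yet
lazy_results = [costly_simulation([i, i + 1]) for i in range(100)]

# compute() executes all of the tasks in parallel and gathers the results
results = dask.compute(*lazy_results)
print(len(results))
```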
Onward with examples!
Client and Scheduler
First off, these examples are all pretty much borrowed from the Dask JobQueue page. The idea is that you write your Python code as usual. Then, when you need to scale across nodes, you leverage your HPC scheduler to get you some compute nodes.
In any distributed software you have something that creates jobs or tasks, something that schedules them, and something that actually executes them. In Dask these are the client (where your code submits work), the scheduler (which farms the tasks out), and the workers (which do the actual computing). The client side is also where the fun configuration happens, like saying how many cores we get to take advantage of.
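On your laptop that might look something like the sketch below, where the client spins up a local scheduler and workers for you (the worker and thread counts are just illustrative):

```python
from dask.distributed import Client

# with no arguments Client starts a local scheduler plus workers;
# here we spell out how many workers/threads we want to use
client = Client(n_workers=4, threads_per_worker=1)

# work submitted through the client goes to the scheduler,
# which hands it off to the workers
future = client.submit(sum, range(1_000_000))
print(future.result())
```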
When using Dask JobQueue, you drop in the configuration for your HPC scheduler (Slurm, PBS, SGE, and friends) instead, as shown in Dask JobQueue - How this works.
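For Slurm, the drop-in replacement looks roughly like the sketch below; the queue name and resource numbers are placeholders, not recommendations:

```python
from dask_jobqueue import SLURMCluster
from dask.distributed import Client

# these resources describe ONE worker job, not the whole cluster
cluster = SLURMCluster(
    queue="normal",       # placeholder partition name
    cores=24,
    memory="120GB",
    walltime="01:00:00",
)

client = Client(cluster)  # same Client as before, now pointed at the HPC cluster
```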
Then, when it's time for your Dask workers to run, they are submitted as regular job scripts, with your usual #SBATCH directives or whatever the PBS equivalent of that is.
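You can even peek at the script Dask JobQueue will submit on your behalf. Using the cluster object from the sketch above:

```python
# prints the generated shell script, including the #SBATCH directives
# derived from the SLURMCluster arguments (queue, cores, memory, walltime, ...)
print(cluster.job_script())
```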
Minimal Example
This is, once again, mostly taken directly from the docs, but I had a tough time finding it amongst all the awesomeness. I removed a little, just for the sake of having a truly minimal example.
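Here's roughly what it boils down to, assuming Slurm; swap SLURMCluster for your scheduler's equivalent, and the resource values below are placeholders you'd replace with your site's:

```python
from dask_jobqueue import SLURMCluster
from dask.distributed import Client
import dask.array as da

# describe what ONE worker job should request from Slurm
cluster = SLURMCluster(cores=8, memory="24GB", walltime="00:30:00")

# ask for ten workers; each one gets submitted as a regular Slurm job
cluster.scale(10)

# connect a client, then use Dask exactly as you would locally
client = Client(cluster)

x = da.random.random((10000, 10000), chunks=(1000, 1000))
print(x.mean().compute())
```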
And that's it! There are still some things I'm digging into, such as the cluster.start_workers call, and how to get support for Slurm job arrays going, but besides that I'm pretty happy! It's nice to see some of the newer tech taking on HPC, since many of us are still (willingly or not!) working in an HPC environment.
In the future I'm going to write a Part 2, where I really dive into the nitty-gritty details of this library.
Happy teching!