The cluster has nodes with NVIDIA GPUs available. Here is some basic usage information. Note that, at the moment, access to these nodes must be requested by emailing the cluster admin. Please also feel free to email the cluster admin with tips and tricks as you discover them.
Compiling GPU applications
The CUDA development toolkit is installed on the interactive nodes, so GPU code can be compiled there.
The toolkit is installed in /usr/local/cuda.
Example compiler flags include -I/usr/local/cuda/include and -L/usr/local/cuda/lib64.
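As a sketch of how those paths are used (the source file names below are placeholders, not files that exist on the cluster), a typical compile might look like:

```shell
# Compile a CUDA source file with nvcc, the toolkit's compiler driver.
# vector_add.cu is a hypothetical example name; substitute your own source.
nvcc -o vector_add vector_add.cu

# For host-only code built with gcc that links against the CUDA runtime,
# point the compiler at the toolkit's include and library directories:
gcc -o app app.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart
```

If nvcc is not on your PATH on the interactive node, invoking it as /usr/local/cuda/bin/nvcc should work, given the install location above.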
Running GPU applications
GPUs are configured as a requestable, consumable resource (just like memory). To ensure that your application runs on a node with an available GPU, your submission script must include a GPU resource request specifying N, the number of GPUs your code will use. You can see how many GPUs the nodes have here.
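The exact directive depends on the cluster's scheduler, which this page does not name, so the lines below are assumptions rather than the cluster's actual syntax; confirm the resource name and format with the cluster admin.

```shell
# Hypothetical submission-script fragments -- the resource name "gpu" and
# both directive styles are assumptions; check which scheduler the cluster runs.

# Grid Engine style: request N GPUs as a consumable complex resource
#$ -l gpu=N

# Slurm style: request N GPUs per node via generic resources (GRES)
#SBATCH --gres=gpu:N
```

In either case, replace N with the number of GPUs your code will actually use, so the scheduler can account for them correctly.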