GPUs on Oscar

To view the various GPUs available on Oscar, use the command

$ nodes gpu

Interactive Use

To start an interactive session on a GPU node, use the interact command and specify the gpu partition. You also need to specify the requested number of GPUs using the -g option:

$ interact -q gpu -g 1

To start an interactive session on a particular GPU type (QuadroRTX, 1080ti, p100, etc.), use the -f (feature) option:

$ interact -q gpu -f quadrortx
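For example, combining the options above to request one QuadroRTX GPU, and then checking the allocation once the session starts (nvidia-smi is provided by the NVIDIA driver on the GPU nodes):

$ interact -q gpu -g 1 -f quadrortx
$ nvidia-smi        # lists the GPU(s) visible to the job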

GPU Batch Job

For production runs, please submit a batch job to the gpu partition. For example, to use 1 GPU:

$ sbatch -p gpu --gres=gpu:1 <jobscript>

These options can also be specified inside the batch script:

#SBATCH -p gpu --gres=gpu:1
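Equivalently, the options can be placed on separate #SBATCH lines:

#SBATCH -p gpu
#SBATCH --gres=gpu:1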

You can view the status of the gpu partition with:

$ allq gpu

Sample batch script for a CUDA program:

~/batch_scripts/cuda.sh
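If you are writing your own script, a minimal GPU batch script might look like the sketch below; the module name, resource limits, and executable name are placeholders, so refer to the sample script above for a version maintained on Oscar:

#!/bin/bash
#SBATCH -p gpu                 # submit to the gpu partition
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH -n 1                   # one task
#SBATCH -t 00:30:00            # walltime (placeholder)
#SBATCH -J cuda-job            # job name (placeholder)

module load cuda               # CUDA toolkit module; exact name/version may differ
./my_cuda_program              # placeholder executable built with nvcc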

Getting started with GPUs

While you can program GPUs directly with CUDA, a language and runtime library from NVIDIA, this can be daunting for programmers who do not have experience with C or with the details of computer architecture.

You may find the easiest way to tap the computation power of GPUs is to link your existing CPU program against numerical libraries that target the GPU:

  • CUBLAS is a drop-in replacement for BLAS libraries that runs BLAS routines on the GPU instead of the CPU.

  • CULA is a similar library for LAPACK routines.

  • CUFFT, CUSPARSE, and CURAND provide FFT, sparse matrix, and random number generation routines that run on the GPU.

  • MAGMA combines custom GPU kernels, CUBLAS, and a CPU BLAS library to use both the GPU and CPU simultaneously; it is available in the 'magma' module on Oscar.

  • Matlab has a GPUArray feature, available through the Parallel Computing Toolbox, for creating arrays on the GPU and operating on them with many built-in Matlab functions. The toolbox is licensed by CIS and is available to any Matlab session running on Oscar or workstations on the Brown campus network.

  • PyCUDA is an interface to CUDA from Python. It also has a GPUArray feature and is available in the cuda module on Oscar (see the module-loading sketch just after this list).
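On Oscar, these libraries are generally accessed through environment modules. The exact module names and versions change over time, so treat the following as a sketch and check module avail for what is currently installed:

$ module avail cuda            # list the available CUDA toolkit modules
$ module load cuda magma       # load CUDA and MAGMA (module names may differ)
$ module list                  # confirm which modules are loaded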

OpenACC

OpenACC is a portable, directive-based parallel programming model. You can parallelize loops and code segments simply by inserting directives, which are ignored as comments if OpenACC is not enabled at compile time. It works on CPUs as well as GPUs. The PGI compiler suite, which supports compiling OpenACC directives, is installed on Oscar. To get you started with OpenACC:
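As a rough sketch (the module name and source file are assumptions), a C file containing OpenACC directives can be compiled with the PGI compilers as follows; -acc enables the directives and -Minfo=accel reports what the compiler offloaded:

$ module load pgi                           # PGI compiler module; exact name may differ
$ pgcc -acc -Minfo=accel saxpy.c -o saxpy   # compile OpenACC-annotated C code
$ ./saxpy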

MATLAB

GPU Programming in Matlab
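For non-interactive Matlab GPU work, a job can be submitted to the gpu partition like any other batch job; the sketch below assumes a matlab module and a script my_gpu_script.m that uses GPUArray (both names are placeholders):

#!/bin/bash
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH -t 01:00:00            # walltime (placeholder)

module load matlab             # module name/version may differ
matlab -nodisplay -r "my_gpu_script; exit"   # run the script without a GUI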

NVLink-Enabled GPU Nodes

NVLink enables GPUs to pool memory over high-speed links (25 GB/s per link). This can improve the performance of multi-GPU application code.

Nodes gpu[1210,1211,1212] have 4 fully connected NVLink (SXM2) V100 GPUs.

To submit an interactive job to the NVLink-enabled GPU nodes:

$ interact -q gpu -f v100

To submit batch jobs, add the following line to your batch script:

#SBATCH --constraint=v100
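For example, a batch script that requests all four NVLink-connected V100 GPUs on a single node might start like this (the walltime, module name, and executable are placeholders):

#!/bin/bash
#SBATCH -p gpu
#SBATCH --constraint=v100      # restrict the job to the NVLink-enabled V100 nodes
#SBATCH --gres=gpu:4           # request all four GPUs on the node
#SBATCH -N 1                   # a single node
#SBATCH -t 01:00:00            # walltime (placeholder)

module load cuda               # module name/version may differ
./my_multi_gpu_program         # placeholder executable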
