Example Container (TensorFlow)
There are multiple ways to install and run TensorFlow. Our recommended approach is to use NGC containers, which are available from the NGC Registry. In this example we will pull the TensorFlow NGC container.
Build the container:
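A minimal sketch of the build step, assuming Apptainer is installed on the cluster. The image tag below is an example; check the NGC catalog for a current one:

```shell
# Pull the TensorFlow image from the NGC registry and build a local
# Apptainer image file. The tag is an example; pick a current one
# from the NGC catalog.
apptainer build tensorflow.simg docker://nvcr.io/nvidia/tensorflow:24.03-tf2-py3
```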
This process will take some time; once it completes, you should see a .simg file.
Working with Apptainer images requires a significant amount of storage space. By default, Apptainer uses ~/.apptainer as its cache directory, which may exceed your home quota. You can redirect the cache and temporary directories as follows:
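A sketch of the environment setup, assuming your cluster provides scratch storage under /scratch/$USER (substitute your site's scratch path, and create the directories before building):

```shell
# Point the Apptainer cache and temporary directories at scratch
# storage instead of the home directory. /scratch/$USER is an
# example path; substitute your site's scratch filesystem and
# create these directories before building.
export APPTAINER_CACHEDIR=/scratch/$USER/.apptainer/cache
export APPTAINER_TMPDIR=/scratch/$USER/.apptainer/tmp
```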
Once the container is ready, request an interactive session with a GPU:
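One way to request the session, assuming a Slurm cluster; the partition name and time limit are placeholders for your site's values:

```shell
# Request an interactive session with one GPU. The partition name
# and time limit are examples; adjust them for your cluster.
srun --partition=gpu --gres=gpu:1 --time=01:00:00 --pty bash
```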
To run a container with GPU support:
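A minimal sketch, assuming the image was built as tensorflow.simg in the current directory:

```shell
# Start an interactive shell inside the container with GPU support.
apptainer shell --nv tensorflow.simg
```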
The --nv flag is important, as it enables the NVIDIA GPU sub-system inside the container.
Or, if you're executing a specific command inside the container:
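For example, assuming a script named train.py (a placeholder for your own code):

```shell
# Run a single command inside the container instead of an
# interactive shell. train.py is a placeholder for your script.
apptainer exec --nv tensorflow.simg python train.py
```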
Make sure your TensorFlow image is able to detect GPUs:
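A quick check, run from outside the container (the image name is assumed to be tensorflow.simg):

```shell
# Verify that TensorFlow inside the container can see the GPU;
# this should print a non-empty list of GPU devices.
apptainer exec --nv tensorflow.simg \
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```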
If you need to install additional custom packages, note that the containers themselves are not writable. However, you can pass pip the --user flag to install packages into ~/.local. For example:
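A sketch of such an install; scikit-learn is just an example package:

```shell
# Install an extra package into ~/.local on the host filesystem,
# which is mounted inside the container by default.
apptainer exec tensorflow.simg pip install --user scikit-learn
```

Packages installed this way persist across container runs, since ~/.local lives in your home directory rather than inside the image.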
Slurm Script:
Here's how you can run your container from a Slurm job script using the srun
command. Below is a basic example:
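A basic batch script sketch, assuming the image tensorflow.simg and a placeholder script train.py; partition, time limit, and job name are examples to adjust for your cluster:

```shell
#!/bin/bash
#SBATCH --job-name=tf-container   # job name (example)
#SBATCH --partition=gpu           # GPU partition; adjust for your cluster
#SBATCH --gres=gpu:1              # request one GPU
#SBATCH --time=01:00:00           # walltime limit (example)
#SBATCH --output=%x-%j.out        # log file named after job and ID

# Run the training script inside the container with GPU support.
# Image and script names are placeholders.
srun apptainer exec --nv tensorflow.simg python train.py
```

Save this as, for example, job.sh and submit it with `sbatch job.sh`.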