Running Images

There are three main ways to run a Singularity image on OSCAR: as an interactive shell, using the exec command, or building a runscript and using the run command.

The most likely usage of your Singularity environment will be via either singularity exec or singularity run. These can be executed directly via a batch script or through an interactive job in place of your traditional execution scripts. For more information about the methods of running a Singularity image, the Singularity quickstart guide is pretty handy.

Treat running a Singularity container like any other executable or codebase on OSCAR. Do not run Singularity containers directly on the login nodes. Instead, run them via an interactive or batch job, or via the terminal within a VNC session.
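
For instance, here is a minimal sketch of a batch script that runs a command inside a container. The image name (mycontainer.sif), script name (myscript.py), and resource requests are placeholder assumptions; adjust them for your workload.

#!/bin/bash
#SBATCH -J container-job       # job name
#SBATCH -n 1                   # number of tasks
#SBATCH --mem=8G               # memory request
#SBATCH -t 01:00:00            # time limit

# If Singularity/Apptainer is provided as a module on your system,
# load it here first (the module name varies by site).

# Execute a hypothetical script inside a hypothetical image.
singularity exec mycontainer.sif python3 myscript.py

Submit the script as usual:

$ sbatch run_container.sh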

Singularity Shell

This launches an interactive shell within a Singularity instance based on the designated image. Use this when you are testing or debugging the image, or when you intend to work via the interact/VNC methods.

$ singularity shell <imagePath>

This method is only applicable when working within an interactive Slurm job or via the terminal within a VNC session.
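
As a quick sketch, assuming a hypothetical image file mycontainer.sif and an interactive job requested with the interact command (the resource flags here are illustrative):

$ interact -n 1 -m 8g -t 1:00:00
$ singularity shell mycontainer.sif
Singularity> python3 --version
Singularity> exit

The prompt changes to Singularity> while you are inside the container; type exit to return to the host shell.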

Singularity Execute Instructions

The next method is to launch the image with a defined set of instructions. This launches the Singularity image and executes whatever commands the user defines.

$ singularity exec <imagePath> <commands>

Here, the commands can include running a script, loading modules, or piping multiple instructions together. For an example of this process, see the Example Container (TensorFlow) section, where we execute a script inside the Singularity image.
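
For example, assuming a hypothetical image mycontainer.sif and script myscript.py, you can run a single script directly, or chain several commands through a shell (provided the image includes bash):

$ singularity exec mycontainer.sif python3 myscript.py
$ singularity exec mycontainer.sif bash -c "python3 preprocess.py && python3 train.py"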

Run Image Instructions

The last method we will cover here is Singularity's run command, which executes a series of instructions provided to the image in the form of a runscript. This script runs automatically if the image is either launched using the run command or executed directly.

$ singularity run <imagePath>

or

./<imagePath>

In both cases, we are executing the container’s “runscript” (the executable /singularity at the root of the image).
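
If you are unsure what an image will do when run, you can print its runscript before executing it. Any arguments passed to singularity run are forwarded to the runscript. For example (the image name is a placeholder):

$ singularity inspect --runscript mycontainer.sif
$ singularity run mycontainer.sif arg1 arg2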

For more information about singularity run and the associated runscript, we recommend viewing the built-in help with singularity run --help or visiting the singularity run documentation.
