Slurm Partitions

Partition Overview

Oscar has the following partitions. The number and size of jobs allowed on Oscar vary with both the partition and the type of user account. You can email support@ccv.brown.edu if you need advice on which partitions to use.

To list partitions on Oscar available to your account, run the following command:

$ sinfo -O "partition"     

To view all partitions (including ones you don't have access to), replace the -O in the command above with -aO.
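To see the size and limits of each partition at a glance, sinfo's -O/--Format option accepts additional fields. The field names below are standard Slurm format options; if your Slurm version differs, check sinfo --help for the available fields.

$ sinfo -O "partition,nodes,cpus,memory,time"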

Name        Purpose
batch       general purpose computing
debug       short wait time, short run time partition for debugging
vnc         graphical desktop environment
gpu         GPU nodes
gpu-he      High End GPU nodes
gpu-debug   short wait time, short run time partition for GPU debugging
bigmem      large memory nodes

batch is the default partition.
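To target a specific partition in a batch script, use the #SBATCH -p (or --partition) directive. The following is a minimal sketch; the script name, resource values, and workload are placeholders:

#!/bin/bash
#SBATCH -p batch               # partition (batch is the default if omitted)
#SBATCH -N 1                   # number of nodes
#SBATCH -n 4                   # number of tasks
#SBATCH -t 1:00:00             # walltime limit

# placeholder workload
srun hostname

Submit the script with sbatch:

$ sbatch myjob.sh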

Partition Details

Below is a brief summary of each partition. For details of the nodes in each partition, please see here.

batch

  • General purpose computing

  • Priority is determined by account type (from highest to lowest: condo, priority, exploratory)

Condo limits apply to the group (i.e., they reflect the sum of all users on the condo). Condo users can check the limits on their condo with the command condos.

There is no time limit for condo jobs, but users should be aware that planned maintenance on the machine may occur (one month’s notice is given before any planned maintenance).
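For example, a condo member can list their condos and limits with the condos command mentioned above, and (assuming condo access is tied to a Slurm account, with the account and script names below as placeholders) charge a job to it:

$ condos                                   # list your condos and their limits
$ sbatch --account=mylab-condo myjob.sh    # placeholder account and script names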

debug

  • Short wait time, short run time access for debugging

  • All users have the same limits and priority on the debug partition
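For example, an interactive shell on the debug partition can be requested with standard Slurm options; the time and resource values below are placeholders sized to the partition's short-run-time intent:

$ srun -p debug -N 1 -n 1 -t 00:30:00 --pty bash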

vnc

  • These nodes are for running VNC sessions/jobs

  • Account type may affect priority

gpu

  • For GPU-based jobs

  • GPU Priority users get higher priority and more resources than free users on the GPU partition

  • Condo users submit to the gpu partition with normal or priority access (if they have a priority account in addition to their condo)
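A minimal GPU batch script might look like the sketch below. It assumes the standard Slurm --gres syntax for requesting one GPU; the module and program names are placeholders:

#!/bin/bash
#SBATCH -p gpu                 # GPU partition
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH -N 1
#SBATCH -t 2:00:00

# placeholder: load your toolchain and run your GPU code
module load cuda
./my_gpu_program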

gpu-he

  • For GPU-based jobs

  • Uses Tesla V100 GPUs

  • Restricted to High End GPU Priority users
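Submission works the same way as the gpu sketch above; only the partition directive changes, and gpu-he access is required:

#SBATCH -p gpu-he              # High End GPU partition (restricted access)
#SBATCH --gres=gpu:1           # request one GPU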

gpu-debug

  • Short wait time, short run time GPU access for debugging

  • All users have the same limits and priority on the gpu-debug partition
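As with the debug partition, a short interactive session is usually the most convenient way to use gpu-debug. A sketch with placeholder values:

$ srun -p gpu-debug --gres=gpu:1 -t 00:30:00 --pty bash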

bigmem

  • For jobs requiring large amounts of memory

  • Priority users get higher priority and more resources than free users on the bigmem partition

  • Condo users submit to the bigmem partition with normal or priority access (if they have a priority account in addition to their condo)

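A sketch of a bigmem batch script; the memory request and program name are placeholders and should be sized to your job:

#!/bin/bash
#SBATCH -p bigmem              # large-memory partition
#SBATCH -N 1
#SBATCH --mem=500G             # placeholder memory request
#SBATCH -t 4:00:00

# placeholder workload
./my_memory_intensive_program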
