System Hardware

Oscar Specifications

| Resource | Count |
| --- | --- |
| Compute Nodes | 388 |
| Total CPU Cores | 20176 |
| GPU Nodes | 82 |
| Total GPUs | 667 |
| Large Memory Nodes | 6 |
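
These totals change as hardware is added or retired. If you want to check the current numbers yourself, the sketch below uses standard Slurm commands from an Oscar login node:

```bash
# Per-partition summary: partition, node count, CPUs per node, memory (MB), and GPUs (GRES)
sinfo -o "%P %D %c %m %G"

# Total number of nodes currently in the cluster (deduplicated across partitions)
sinfo -N -h -o "%N" | sort -u | wc -l
```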

Compute Nodes

Oscar has compute nodes in the partitions listed below; an example job script that selects a partition follows the list.

  • batch - The batch partition is for programs/jobs that need neither GPUs nor large memory.

  • bigmem - The bigmem partition is for programs/jobs that require large memory.

  • debug - The debug partition is for users to debug programs/jobs.

  • gpu - The gpu partition is for programs/jobs that require GPUs.

  • gpu-debug - The gpu-debug partition is for users to debug GPU programs/jobs.

  • gpu-he - The gpu-he partition is for programs/jobs that need access to high-end GPUs.

  • vnc - The vnc partition is for users to run programs/jobs in a graphical desktop environment.
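
A minimal sketch of selecting a partition with the --partition directive (the resource amounts, time limit, and program name are placeholders to adjust for your own work):

```bash
#!/bin/bash
#SBATCH --partition=batch        # one of: batch, bigmem, debug, gpu, gpu-debug, gpu-he, vnc
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00
#SBATCH --job-name=partition-example

# Replace ./my_program with the actual program you want to run
srun ./my_program
```

Submit the script with sbatch; the Batch Jobs page covers these directives in more detail.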

Below are node details including cores and memory for all partitions.

| Partition | Total Nodes | Total Cores | Cores Per Node | Total GPUs | Memory Per Node (GB) |
| --- | --- | --- | --- | --- | --- |
| batch | 288 | 12800 | 24-192 | n/a | 190-1540 |
| bigmem | 6 | 512 | 32-192 | n/a | 770-1540 |
| gpu | 64 | 5000 | 24-128 | 519 | 190-1028 |
| gpu-he | 12 | 552 | 24-64 | 84 | 190-1028 |
| debug | 2 | 96 | 48 | n/a | 382 |
| gpu-debug | 1 | 48 | 48 | 8 | 1028 |
| vnc | 303 | 13696 | 24-192 | 40 | 102-1540 |
| viz | 1 | 48 | 48 | 8 | 1028 |

Hardware details

Hardware details for all partitions are listed in the table below. The Features column shows the features available for Slurm's --constraint option, including the available CPU types as well as GPU types; an example job script that uses --constraint follows the table.

| Partition | Nodes | CPUs/Node | Total CPUs | GPUs/Node | Total GPUs | Memory (GB) | Features |
| --- | --- | --- | --- | --- | --- | --- | --- |
| batch | 100 | 32 | 3200 | n/a | n/a | 190 | 32core, intel, scalable, cascade, edr |
| batch | 122 | 48 | 5856 | n/a | n/a | 382 | 48core, intel, cascade, edr |
| batch | 40 | 32 | 1280 | n/a | n/a | 382 | 32core, intel, scalable, cascade, edr, cifs |
| batch | 10 | 192 | 1920 | n/a | n/a | 1540 | 192core, amd, genoa, edr |
| batch | 4 | 64 | 256 | n/a | n/a | 514 | 64core, intel, icelake, edr |
| batch | 2 | 24 | 48 | n/a | n/a | 770 | 24core, intel, e5-2670, e5-2600, scalable, skylake, fdr |
| batch | 10 | 24 | 240 | n/a | n/a | 382 | 24core, intel, e5-2670, e5-2600, scalable, skylake, fdr |
| bigmem | 4 | 32 | 128 | n/a | n/a | 770 | 32core, intel, scalable, cascade, edr |
| bigmem | 2 | 192 | 384 | n/a | n/a | 1540 | 192core, amd, genoa, edr |
| gpu | 2 | 32 | 64 | 5 | 10 | 382 | intel, gpu, titanrtx, turing, skylake, 6142 |
| gpu | 1 | 24 | 24 | 5 | 5 | 190 | intel, gpu, titanrtx, turing, skylake, 6142 |
| gpu | 1 | 48 | 48 | 10 | 10 | 382 | intel, gpu, quadrortx, turing, cascade |
| gpu | 10 | 32 | 320 | 10 | 100 | 382 | intel, gpu, quadrortx, turing, cascade |
| gpu | 13 | 64 | 832 | 8 | 104 | 1028 | amd, gpu, geforce3090, ampere |
| gpu | 4 | 48 | 192 | 8 | 32 | 1028 | amd, gpu, geforce3090, ampere |
| gpu | 7 | 128 | 896 | 8 | 56 | 1028 | amd, cifs, gpu, a5500, ampere |
| gpu | 10 | 64 | 640 | 8 | 80 | 1028 | amd, cifs, gpu, a5000, ampere |
| gpu | 10 | 128 | 1280 | 8 | 80 | 1028 | amd, cifs, gpu, a5000, ampere |
| gpu | 1 | 64 | 64 | 2 | 2 | 1028 | amd, gpu, a5000, ampere |
| gpu | 2 | 128 | 256 | 8 | 16 | 1028 | amd, gpu, a5500, cifs, ampere |
| gpu | 3 | 128 | 384 | 8 | 24 | 1028 | amd, gpu, cifs, a5000, ampere |
| gpu-he | 3 | 48 | 144 | 8 | 24 | 1028 | amd, gpu, a40, ampere |
| gpu-he | 3 | 24 | 72 | 4 | 12 | 190 | intel, gpu, 4gpu, v100, volta, skylake, 6126 |
| gpu-he | 4 | 64 | 256 | 8 | 32 | 1028 | amd, gpu, a6000, ampere |
| gpu-he | 2 | 40 | 80 | 8 | 16 | 512 | intel, cifs, gpu, v100, volta, haswell |
| debug | 2 | 48 | 96 | n/a | n/a | 382 | 48core, intel, cascade, edr |
| gpu-debug | 1 | 48 | 48 | 8 | 8 | 1028 | amd, gpu, geforce3090, ampere |
| vnc | 100 | 32 | 3200 | n/a | n/a | 190 | 32core, intel, scalable, cascade, edr |
| vnc | 134 | 48 | 6432 | n/a | n/a | 382 | 48core, intel, cascade, edr |
| vnc | 1 | 64 | 64 | 8 | 8 | 1028 | amd, cifs, gpu, a5000, ampere |
| vnc | 2 | 128 | 256 | 16 | 32 | 102 | amd, gpu, a2, ampere |
| vnc | 40 | 32 | 1280 | n/a | n/a | 382 | 32core, intel, scalable, cascade, edr, cifs |
| vnc | 10 | 192 | 1920 | n/a | n/a | 1540 | 192core, amd, genoa, edr |
| vnc | 4 | 64 | 256 | n/a | n/a | 514 | 64core, intel, icelake, edr |
| vnc | 2 | 24 | 48 | n/a | n/a | 770 | 24core, intel, e5-2670, e5-2600, scalable, skylake, fdr |
| vnc | 10 | 24 | 240 | n/a | n/a | 382 | 24core, intel, e5-2670, e5-2600, scalable, skylake, fdr |
| viz | 1 | 48 | 48 | 8 | 8 | 1028 | amd, gpu, geforce3090, ampere |
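
A minimal sketch of using the feature names above with --constraint (the partition, task count, and time limit are placeholder values); features can be combined with & (AND) or | (OR):

```bash
#!/bin/bash
#SBATCH --partition=batch
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:30:00
#SBATCH --constraint=intel&cascade   # only schedule on Intel Cascade Lake nodes

# Report which node the job landed on and its CPU model
hostname
lscpu | grep "Model name"
```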

GPU Features and GPU Memory

| GPU Feature | GPU Memory |
| --- | --- |
| a6000 | 48 GB |
| a40 | 45 GB |
| v100 | 32 GB |
| a5000 | 24 GB |
| quadrortx | 24 GB |
| titanrtx | 24 GB |
| geforce3090 | 24 GB |
| p100 | 12 GB |
| titanv | 12 GB |
| 1000ti | 11 GB |
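
These feature names can be paired with a GPU request to target cards with enough memory for a given workload. A minimal sketch (the partition, GPU count, and time limit are placeholder values; the | means either feature is acceptable):

```bash
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --constraint=a5000|quadrortx   # accept either 24 GB card
#SBATCH --time=02:00:00

# Show which GPU was actually allocated
nvidia-smi
```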
