QoS and Partition

Partitions

| Partition | Nodes | Allowed QoS |
| --- | --- | --- |
| cpugpu | Lenovo SR630 with Tesla T4 | cu_hpc, cu_htc, cu_long, cu_student, escience |
| cpu | Lenovo x3850 X6, Lenovo SR850 | cu_hpc, cu_htc, cu_long, cu_student, escience |
| math | IBM iDataPlex DX360M4 | cu_math |
| profiling | Lenovo SR635 with Tesla T4 and A2 | cu_profile |
| dgx | DGX Station | cu_hpc |
| test | IBM BladeCenter HS22 | cu_hpc, cu_htc, cu_long, cu_student, escience |
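To see how these partitions are configured on the cluster, the standard Slurm query commands can be used. This is a minimal sketch; the partition name cpu is just one entry from the table above.

```bash
# Summary of the partitions visible to your account (state, node count, time limit)
sinfo -s

# Full configuration of a single partition, e.g. cpu
scontrol show partition cpu
```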

Quality of Service (QoS)

Jobs request a QoS using the "--qos=" option with the sbatch, salloc, and srun commands; an example batch script is shown after the table below.

| QoS | Max nodes per user | Max jobs per user | Max CPU per user | Max memory (GB) per user | Max walltime (Day-HH:MM:SS) |
| --- | --- | --- | --- | --- | --- |
| cu_hpc | 8 | 20 | 128 | 512 | 14-00:00:00 |
| cu_htc | 1 | 100 | 128 | 256 | 30-00:00:00 |
| cu_long | 4 | 10 | 128 | 512 | 30-00:00:00 |
| cu_student | 2 | 10 | 16 | 64 | 7-00:00:00 |
| cu_math | 2 | 2 | 16 | 120 | 30-00:00:00 |
| escience | 1 | 10 | 16 | 64 | 7-00:00:00 |
| cu_cms | | | | | |
| cu_profiling | | | | | |
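As an illustration of requesting a QoS together with a partition, the batch script below combines --qos= and --partition=. It is a minimal sketch: the job name, resource numbers, and ./my_program are placeholders, and the requested resources are chosen to stay within the cu_student limits from the table above.

```bash
#!/bin/bash
#SBATCH --job-name=qos-demo          # placeholder job name
#SBATCH --partition=cpu              # partition from the table above
#SBATCH --qos=cu_student             # QoS allowed on the cpu partition
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4            # within the 16-CPU limit of cu_student
#SBATCH --mem=8G                     # within the 64 GB limit of cu_student
#SBATCH --time=01:00:00              # within the 7-day walltime limit

srun ./my_program                    # placeholder executable
```

Submit the script with `sbatch job.sh`. The QoS limits currently configured on the cluster can also be checked directly with `sacctmgr show qos`.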
