You can also use other shells if you want; you are not limited to bash. See a tcsh/csh example in the CMSSW example.
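For instance, a tcsh version of a job script only needs a tcsh shebang and tcsh syntax for setting variables; the #SBATCH directives are read by Slurm regardless of the shell. A minimal sketch (directive values borrowed from the bash example below; the job and output names here are just placeholders):
#!/bin/tcsh
#SBATCH --qos=cu_hpc
#SBATCH --partition=cpu
#SBATCH --job-name=example1_tcsh
#SBATCH --output=example1_tcsh.txt
#SBATCH --time=00:10:00

# tcsh sets environment variables with setenv instead of export
setenv MYCODEDIR `pwd`
echo "MYCODEDIR = "$MYCODEDIR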
To submit the job, use sbatch:
sbatch example1.slurm
You will see
Submitted batch job 81942
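If you want to use the job ID in a script (for example, to check on the job later), sbatch has a --parsable option that prints only the ID. A small sketch, assuming the same example1.slurm:
# Submit and keep the job ID for later use
JOBID=$(sbatch --parsable example1.slurm)
echo "Submitted job $JOBID"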
To check which state your job is in:
squeue -u your_user_name
In the ST column, R means Running and PD means Pending.
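A few common variations, in case you want to look at a single job or see why a pending job is waiting (these are standard squeue options):
squeue -u your_user_name        # all of your jobs
squeue -j 81942                 # only the job with this ID
squeue -u your_user_name -t PD  # only your pending jobs; the reason is shown in NODELIST(REASON)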
Your output should look like
==========================================
SLURM_JOB_ID = 81943
SLURM_NODELIST = cpu-bladeh-01
==========================================
cpu-bladeh-01.stg
Linux cpu-bladeh-01.stg 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
model name : Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
16
pwd = /work/home/your_user_name/slurm/example1
TMPDIR = /work/scratch/your_user_name/81943
SLURM_SUBMIT_DIR = /work/scratch/your_user_name/81943
SLURM_JOBID = 81943
Hello World
From the Slurm output, you can see that your job runs in the same directory from which you submitted it (e.g. /work/home/your_user_name/slurm/example1). This is not recommended. You should move the job to run in $TMPDIR (or $SLURM_SUBMIT_DIR) and copy the output back when the job is done. Below is a modified example1.slurm that runs in $TMPDIR and copies test.log (the output of the python script) back to your submission directory. $TMPDIR is deleted automatically after the job finishes.
#!/bin/bash
#
#SBATCH --qos=cu_hpc
#SBATCH --partition=cpu
#SBATCH --job-name=example1
#SBATCH --output=example1.txt
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G

module purge

# To get worker node information
hostname
uname -a
more /proc/cpuinfo | grep "model name" | head -1
more /proc/cpuinfo | grep "processor" | wc -l

# To set your submission directory
echo "pwd = "`pwd`
export MYCODEDIR=`pwd`

# Check PATHs
echo "MYCODEDIR = "$MYCODEDIR
echo "TMPDIR = "$TMPDIR
echo "SLURM_SUBMIT_DIR = "$SLURM_SUBMIT_DIR
echo "SLURM_JOBID = "$SLURM_JOBID

# Move to TMPDIR and run python script
cp example1.py $TMPDIR
cd $TMPDIR
python example1.py >| test.log
ls -l
cp -rf test.log $MYCODEDIR/
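After resubmitting the modified script, test.log should appear next to example1.slurm once the job finishes. A quick way to check, assuming the file names used above:
sbatch example1.slurm          # resubmit the modified script
squeue -u your_user_name       # wait until the job is no longer listed
ls -l example1.txt test.log    # Slurm log and the copied-back python output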