# 101: Interactive jobs with Slurm

You can use `salloc` to allocate resources in real time and run an interactive job. Typically, `salloc` allocates resources and spawns a shell; that shell is then used to execute `srun` commands that launch parallel tasks. Interactive jobs are useful for tasks such as data exploration, development, or (with X11 forwarding) visualization. The maximum walltime depends on the [QoS](https://esciencecu-twiki.sc.chula.ac.th/slurm/slurm-qos-and-partition) you use.

For worker nodes with CPU and GPU:

```
salloc --qos=cu_hpc --partition=cpugpu
```

For worker nodes with CPU-only:

```
salloc --qos=cu_hpc --partition=cpu
```
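By default, `salloc` requests a minimal allocation. You can ask for specific resources with standard Slurm options; the values below are illustrative only, and the limits actually available depend on your QoS (these commands only run on the cluster frontend):

```shell
# Example: 1 node, 4 tasks, 1 GPU, 2-hour walltime (illustrative values).
# --gres=gpu:1 is only meaningful on the cpugpu partition.
salloc --qos=cu_hpc --partition=cpugpu \
       --nodes=1 --ntasks=4 --gres=gpu:1 --time=02:00:00
```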

Once the allocation is granted, you will see messages like

```
salloc: Granted job allocation 82025
salloc: Waiting for resource configuration
salloc: Nodes cpu-bladeh-01 are ready for job
```
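Inside the allocation shell, Slurm exports job details as environment variables, which is a quick way to confirm you are in the allocation rather than on the frontend. A minimal check (the variables are unset outside a Slurm job):

```shell
# SLURM_JOB_ID and SLURM_JOB_NODELIST are set by Slurm inside an
# allocation; outside a job the fallback text is printed instead.
echo "Job ID: ${SLURM_JOB_ID:-<not in a Slurm job>}"
echo "Nodes:  ${SLURM_JOB_NODELIST:-<not in a Slurm job>}"
```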

and with `squeue -u your_user_name`, you will see

```
JOBID PARTITION     NAME           USER ST       TIME  NODES NODELIST(REASON)
82025       cpu interact your_user_name  R       1:59      1 cpu-bladeh-01
```

To run a command on the allocated node, use `srun`, e.g.

```
[your_user_name@frontend-02 ~]$ srun hostname
cpu-bladeh-01.stg
```
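`srun` launches one copy of the command per allocated task, so with a multi-task allocation a single `srun` runs the command in parallel. A sketch, assuming the example allocation above granted `--ntasks=4` (cluster-only commands):

```shell
# Runs hostname once per allocated task (4 times with --ntasks=4).
srun hostname

# You can also use a subset of the allocated tasks:
srun --ntasks=2 hostname
```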

To exit interactive mode, use the command `exit`:

```
[your_user_name@frontend-02 ~]$ exit
exit
salloc: Relinquishing job allocation 82025
```
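If the interactive shell becomes unresponsive, the allocation can also be released from another terminal with `scancel`, using the job ID reported by `salloc` or `squeue` (82025 here is just the ID from the example above):

```shell
# Cancel the interactive job and release its resources.
scancel 82025
```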
