CU e-Science HTC-HPC Cluster and Cloud


Login to our cluster


Last updated 3 years ago


Slurm

To log in to a frontend, use an ssh client and type Command Line Interface (CLI) commands to tell the computer what to do. An ssh client is available on Linux, macOS (in Terminal), and MS Windows 10 (in PowerShell). For older Microsoft Windows machines, the PuTTY client is recommended.

ssh your_user_name@escience0.sc.chula.ac.th

Note that escience0 is the load-balancing address and is fine for any job submission. However, if you would like to compile your code for specific hardware, e.g. a GPU, you should log in to the corresponding machine. Do not run your code on the login machines unless necessary; jobs that consume a lot of resources on the frontend nodes will be killed.

  • escience1.sc.chula.ac.th: small frontend machine, for job submission and monitoring only.

  • escience2.sc.chula.ac.th: small frontend machine, for job submission and monitoring only.

  • escience3.sc.chula.ac.th: frontend with an NVIDIA Tesla T4 GPU.
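
If you log in to a specific frontend often, entries in your OpenSSH client configuration file save typing. A minimal sketch (the alias names are our invention, and `your_user_name` is a placeholder for your actual account):

```
# ~/.ssh/config — hypothetical aliases for the e-Science frontends
Host escience1
    HostName escience1.sc.chula.ac.th
    User your_user_name

Host escience3
    HostName escience3.sc.chula.ac.th
    User your_user_name
```

With this in place, `ssh escience3` connects directly to the GPU frontend without retyping the full hostname and username.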

Kubernetes

Under construction.
