SABARMATI

SABARMATI CLUSTER DETAILS

Name of the HPC cluster: Sabarmati

Fully Qualified Domain Name: sabarmati.iitgn.ac.in

IP Address of HPC cluster: 10.0.138.33

Make: Fujitsu

Usable Storage: ~60 TB

Total CPU: 200 cores, Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz

Total Compute nodes: 5

Job Scheduler: SLURM 18.08.7

User-level Quota: 100 GB per user in the home directory

To check your quota: quota -v

Usage Guidelines
  • Users should run and write their jobs from /scratch/username/foldername only; jobs should NOT be run from or written to /home/username (see the example after this list).
  • HPC is a central facility shared by all members of the institute. Users should therefore use an optimum number of processing cores by testing for scale-up.
  • A sample script is provided in the home directory of each user.
  • The quota for each user is 100 GB in the home directory. There is no per-user limit for the scratch folder.
  • An automatic email will be sent to the HPC user community once 75% of the scratch space is used. Another automatic email will be sent once 85% of the scratch space is used. Deletion of files by the administrator will commence within 24 hours of the second email.
  • It is strongly recommended that users back up their files and folders periodically, as ISTF does not have a mechanism to back up users' data.
  • Files in the scratch directory are deleted automatically 21 days after their last timestamp/update.
  • Users are strictly NOT ALLOWED to run any jobs on the Master Nodes of the HPC cluster.
  • A priority-based queueing system is implemented so that all users get a fair share of available resources. Priority is decided by multiple factors, including job size, queue priority, past and present usage, and time spent in the queue.
  • Please note that there is an incentive to optimize your usage: you will get higher priority.
  • It is strongly recommended that users request the scheduler to pick cores from the same node whenever possible. If cores are not available on the same node, users can request cores from other nodes.
  • There is no limit on the number of jobs per user. The maximum number of cores per user is set at 80. The maximum number of cores per job is currently set at 40.
  • For any issues or requests pertaining to Sabarmati, please email helpdesk.istf@iitgn.ac.in with your working path, error logs, error screenshots, and submit script.
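
A minimal sketch of preparing a working directory in scratch and submitting from there ($USER expands to your username; "myjob" and "submit.sh" are placeholder names):

mkdir -p /scratch/$USER/myjob          # create your own working folder in scratch
cd /scratch/$USER/myjob                # always work from scratch, never from /home
cp ~/submit.sh .                       # copy your input files and the sample submit script here
sbatch submit.sh                       # submit the job through the scheduler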
Software

Python:

apps/anaconda/python/3.7.5

apps/anaconda/python/3.8.

CP2K:

apps/cp2k/cp2k-8.1-gcc-11.1

Intel Suite:

compilers/intel/parallel_studio_xe_2019.5.075

compilers/intel/parallel_studio_xe_2019.5.075-icc-ifort

compilers/intel/parallel_studio_xe_2019.5.075-mkl

MPICH:

compilers/mpich/gcc485/3.0.4

compilers/mpich/gcc485/3.3

OPENMPI:

compilers/openmpi/gcc485/1.10.7

compilers/openmpi/gcc485/3.1.1rc1

compilers/openmpi/icc_2019_u5/4.0.3

openmpi/3.1.3/2019

ATAT:

intel_oneapi/2021.2/apps/atat/3.36

CASSANDRA:

intel_oneapi/2021.2/apps/cassandra/v1.2

CHARMM:

intel_oneapi/2021.2/apps/charmm/c39b2/parallel

intel_oneapi/2021.2/apps/charmm/c39b2/parallel-mkl

intel_oneapi/2021.2/apps/charmm/c39b2/parallel-mkl-fftw

intel_oneapi/2021.2/apps/charmm/c39b2/serial

intel_oneapi/2021.2/apps/charmm/c39b2/serial-mkl

intel_oneapi/2021.2/apps/charmm/c39b2/serial-mkl-fftw

DEAL.II

intel_oneapi/2021.2/apps/deal.ii/9.2.0

GROMACS:

intel_oneapi/2021.2/apps/gromacs/2019.4

intel_oneapi/2021.2/apps/gromacs/2019.4-mpi

intel_oneapi/2021.2/apps/gromacs/2019.4-mpi-plumed

intel_oneapi/2021.2/apps/gromacs/2019.4-plumed

intel_oneapi/2021.2/apps/gromacs/2019.6

intel_oneapi/2021.2/apps/gromacs/2019.6-mpi

intel_oneapi/2021.2/apps/gromacs/2019.6-mpi-plumed

intel_oneapi/2021.2/apps/gromacs/2019.6-plumed

intel_oneapi/2021.2/apps/gromacs/2020.1

intel_oneapi/2021.2/apps/gromacs/2020.1-mpi

intel_oneapi/2021.2/apps/gromacs/2020.1-mpi-plumed

intel_oneapi/2021.2/apps/gromacs/2020.1-plumed

intel_oneapi/2021.2/apps/gromacs/2020.4

intel_oneapi/2021.2/apps/gromacs/2020.4-mpi

intel_oneapi/2021.2/apps/gromacs/2020.4-mpi-plumed

intel_oneapi/2021.2/apps/gromacs/2020.4-plumed

intel_oneapi/2021.2/apps/gromacs/2020.5-mpi-plumed

intel_oneapi/2021.2/apps/gromacs/2020.5-plumed

intel_oneapi/2021.2/apps/gromacs/2021

intel_oneapi/2021.2/apps/gromacs/2021-mpi

intel_oneapi/2021.2/apps/gromacs/2021-mpi-plumed

intel_oneapi/2021.2/apps/gromacs/2021-plumed

intel_oneapi/2021.2/apps/gromacs/5.1.5

intel_oneapi/2021.2/apps/gromacs/5.1.5-mpi

LAMMPS:

intel_oneapi/2021.2/apps/lammps/29Oct2020

intel_oneapi/2021.2/apps/lammps/29Oct2020-plumed

NAMD:

intel_oneapi/2021.2/apps/namd/2.14

intel_oneapi/2021.2/apps/namd/2.14-plumed

Quantum Espresso:

intel_oneapi/2021.2/apps/q-e/6.2.0

intel_oneapi/2021.2/apps/q-e/6.2.0-plumed

intel_oneapi/2021.2/apps/q-e/6.5

intel_oneapi/2021.2/apps/q-e/6.5-plumed

WRF:

apps/gcc_485/wrf/3.8/dm

apps/gcc_485/wrf/3.9.1/dm

apps/gcc_485/wrf_hydro/5.1.1/envs

apps/gcc_485/wrf_hydro/envs

apps/intel_pstudio_u5/wrf/3.8/sm_dm

apps/intel_pstudio_u5/wrf/3.9/sm_dm

apps/intel_pstudio_u5/wrf/3.9.1/sm_dm

apps/intel_pstudio_u5/wrf/4.1/sm_dm

apps/intel_pstudio_u5/wrf/4.2.1/dm

intel_oneapi/2021.2/apps/wrf/4.2/sm_dm

NetCDF:

intel_oneapi/2021.2/libs/netcdf/hdf5-parallel-1.10.6/netcdf-c-4.7.3-f-4.5.2-cxx-4.3.1

intel_oneapi/2021.2/libs/pnetcdf/1.12.1

libs/intel/pstudio/2019/u5/netcdf/hdf5-parallel-1.10.6/c-4.7.3-f-4.5.2-cxx-4.3.1

libs/intel/pstudio/2019/u5/netcdf/hdf5-serial-1.10.6/c-4.7.3-f-4.5.2-cxx-4.3.1

libs/intel/pstudio/2019/u5/pnetcdf/1.12.1

libs/pgi/19.10/netcdf/hdf5-serial-1.10.6/c-4.7.3-f-4.5.2-cxx-4.3.1

VIC:

apps/intel_pstudio_u5/vic/4.1.2.g

CDO:

apps/intel_pstudio_u5/cdo/1.9.8

Boost:

libs/gcc485/boost/1.64.0

HYPRE:

libs/gcc485/hypre/2.11.2

FFTW:

intel_oneapi/2021.2/libs/fftw3/3.3.9

libs/intel/pstudio/2019/u5/fftw/3.3.8

PGI:

pgi/19.10

pgi/2019

pgi-llvm

pgi-nollvm

PrgEnv-pgi/19.10
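
The entries above are environment module names. A quick sketch of how such modules are typically listed and loaded, assuming the standard "module" command (Environment Modules/Lmod) is available on Sabarmati:

module avail                                              # list all available modules
module load intel_oneapi/2021.2/apps/gromacs/2020.4-mpi   # load one of the modules listed above
module list                                               # show currently loaded modules
module unload intel_oneapi/2021.2/apps/gromacs/2020.4-mpi # unload it when finished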

Queuing Systems & Scheduler
Queuing Systems

When a job is submitted, it is placed in a queue. Different queues are available for different purposes. Users must select the queue listed below that is appropriate for their computational needs.

 

Queue Name: main

Max Wall time: 48 hours

Max number of cores per job: 40
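
To confirm the current limits of a queue from the command line, SLURM's own tools can be used (a sketch; the exact fields shown depend on the site configuration):

sinfo                           # list partitions and node states
scontrol show partition main    # show the time limit and other settings of the main queue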

 

Quantum Espresso Sample Job Script – submit.sh

#!/bin/bash

#SBATCH --job-name=qespresso           # Job name

#SBATCH --ntasks=16                    # Number of MPI tasks (i.e. processes)

#SBATCH --cpus-per-task=1              # Number of cores per MPI task
#SBATCH --partition=main               # Queue/Partition name

module load intel_oneapi/2021.2/apps/q-e/6.2.0

cd /scratch/<your username>/<foldername>

mpirun -np 16 pw.x -inp ./<input file> |& tee single_node-$(date +%s).log

Gromacs Sample Job Script – submit.sh

#!/bin/bash

#SBATCH --job-name=gromacs             # Job name

#SBATCH --ntasks=16                    # Number of MPI tasks (i.e. processes)

#SBATCH --cpus-per-task=1              # Number of cores per MPI task
#SBATCH --partition=main               # Queue/Partition name

module load intel_oneapi/2021.2/apps/gromacs/2020.4-mpi

cd /scratch/<your username>/<foldername>

mpirun -np 16 gmx_mpi mdrun -deffnm pme_test -v -ntomp 2 -nsteps 1000 -noconfout -pin on -noddcheck |& tee two_node-$(date +%s).log

LAMMPS Sample Job Script – submit.sh

#!/bin/bash

#SBATCH --job-name=lammps              # Job name

#SBATCH --ntasks=16                    # Number of MPI tasks (i.e. processes)

#SBATCH --cpus-per-task=1              # Number of cores per MPI task

#SBATCH --partition=main               # Queue/Partition name

module load intel_oneapi/2021.2/apps/lammps/29Oct2020

#MACHINEFILE=machinefile

#scontrol show hostname $SLURM_JOB_NODELIST > $MACHINEFILE

mpirun -np 16 lmpmps -in lmp.txt -var lat 4.03208 -var t_final 1 -var mu 2.1 -var ens 1 |& tee single_node-$(date +%s).log

Useful Commands
  • For submitting a job: sbatch submit_script.sh (see the combined example below)
  • For checking queue status: squeue -l
  • For checking node status: sinfo
  • For cancelling a job: scancel <job-id>
  • For checking the generation of output at runtime: tail -f output.log
  • For copying a file/folder from the cluster to your machine with scp: scp -r files/folders username@your-machine-IP-Address:
  • For copying a file/folder from your machine to the cluster with scp: scp -P 2022 -r files/folders username@10.0.138.33:
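
A minimal end-to-end sketch combining the commands above (submit_script.sh, output.log and <job-id> are placeholders for your own script, output file and job ID):

sbatch submit_script.sh         # submit the job; SLURM prints the assigned job ID
squeue -l -u $USER              # check the status of your own jobs in the queue
tail -f output.log              # watch the output file as it is generated (Ctrl+C to stop watching)
scancel <job-id>                # cancel the job if needed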
How-To's

How to Obtain an Account on Sabarmati: Please send an email to helpdesk.istf@iitgn.ac.in with a copy to your supervisor. Please also let us know the required duration of the account and the list of software you wish to run.

Name of the cluster: Sabarmati

IP of the cluster: 10.0.138.33

Hostname of the cluster: sabarmati.iitgn.ac.in

Login from Linux:

To log in from Linux, simply open a Terminal, which is installed with the base OS of any flavor of Linux.
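
A login sketch from a Linux terminal, assuming SSH on the default port from within the campus network (external access may require a different port, as in the scp example above):

ssh username@sabarmati.iitgn.ac.in
ssh username@10.0.138.33        # alternatively, use the IP address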

Login from Windows:

If you use Windows, you can use PuTTY, which can be downloaded from the PuTTY website. Enter the cluster's hostname (sabarmati.iitgn.ac.in) or IP address in PuTTY and open the connection.

Click "Yes" to continue when PuTTY shows the host-key security alert on the first connection.

External Usage

Computational Resources for External Usage @HPCLab in IITGN

Access to our High Performance Computing (HPC) Facility is granted to external users (Academic/Research organizations and Industry only) through a Committee.

The proposal from the user should include the following:

  • Technical details of the specific facility needed and the duration of use
  • A brief scientific description of the proposed work

Please send your detailed proposal to support.hpc@iitgn.ac.in

Based on the review outcome and the feasibility within our facility, we will allocate computing resources.

Obtaining the HPC Account

  • Once the proposal is reviewed, accepted and approved by the committee, the user may download the HPC application form, fill it in, ink-sign it, scan it, and email it to us.
  • A unique group/user name will then be created for the external user and associated user(s), and the user credentials will be sent in a reply email.

Usage Policy

Forms

Contact

  • Email: support.hpc@iitgn.ac.in
Funding

Coming Soon…!

Publications

Coming Soon…!

Galleries

FAQs

How do I request an HPC account?

Please send an email to helpdesk.istf@iitgn.ac.in with a copy to your supervisor. Please also let us know the required duration of the account and the list of software you wish to run.

What is my quota and how do I check it?

The quota for each user is 100 GB in the home directory. There is no per-user limit for the scratch folder. To check your quota, run quota -v. Your home directory is located at /home/<your Supervisor_grp>/<username>.
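
To see how much space your own files occupy (the scratch path follows the layout described in the usage guidelines):

quota -v                        # home-directory quota and current usage
du -sh /scratch/$USER           # total size of your scratch folder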

How many jobs can I run, and with how many cores?

There is no limit on the number of jobs per user. The maximum number of cores per user is set at 96. The maximum number of cores per job is currently set at 64.

How has the scheduling been implemented?

The SLURM scheduler will automatically find the required number of processing cores from the nodes (even if a node is partially used). Please do not explicitly specify node names or the number of nodes in the script. A priority-based queueing system is implemented so that all users get a fair share of available resources. Priority is decided by multiple factors, including job size, queue priority, past and present usage, and time spent in the queue.
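
One way to inspect the priority factors SLURM has assigned to pending jobs is sprio (a sketch; the factors and weights shown depend on how the scheduler is configured):

sprio -l                        # per-factor priority breakdown of pending jobs
squeue -l -u $USER              # overall state and pending reason of your own jobs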

Where do I run my Jobs?

Users should run and write their jobs from /scratch/username/foldername only; jobs should NOT be run from or written to /home/username.

User Data Backup

It is strongly recommended that users back up their files and folders periodically, as ISTF does not have a mechanism to back up users' data. An automatic email will be sent to the HPC user community once 75% of the scratch space is used. Another automatic email will be sent once 85% of the scratch space is used. Deletion of files by the administrator will commence within 24 hours of the second email. Files in the scratch directory are deleted automatically 21 days after their last timestamp/update.
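
A sketch of how you might check which of your scratch files are older than 21 days and copy results off the cluster before they are removed ("myjob", "results" and the destination address are placeholders):

find /scratch/$USER -type f -mtime +21                                  # files not modified in the last 21 days
scp -r /scratch/$USER/myjob/results username@your-machine-IP-Address:   # copy results back to your own machine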

Can I run jobs interactively on the master node or any other node, bypassing the scheduler?

Users are strictly NOT ALLOWED to run any jobs interactively on the Master Nodes or on any other node. Users must run jobs only through the job scheduler using submit scripts.

Whom should I contact for any issue?

For any issues or requests pertaining to SABARMATI, please email helpdesk.istf@iitgn.ac.in with your working path, error logs, error screenshots, and submit script.