Avogadro


Revision as of 14:25, 7 August 2021


Description

Avogadro is the largest, but also the oldest, cluster in the C3P facility. It is equipped with the following hardware and software:

  • 71 nodes with 2 x CPU Intel Woodcrest Dual Core 2.6 GHz (4 cores), 2 x HD SAS 72 GB, 8 GB RAM, Infiniband, OS Red Hat Enterprise Linux WS release 4
  • 9 nodes with 2 x CPU Intel Woodcrest Quad Core 2.6 GHz (8 cores), 2 x HD SAS 72 GB, 16 GB RAM, Infiniband, OS Red Hat Enterprise Linux WS release 4

for a total of 80 nodes, 356 cores.


Access

  • Linux and macOS users can log in from a terminal; Windows 10 users can use PowerShell.

The first step is to open an SSH tunnel. From within the DiSC network this can be done simply as:

ssh -L 2000:192.168.16.253:22 account@192.168.9.15 -p 7000
or from outside the Department using
ssh -L 2000:192.168.16.253:22 account@147.162.63.10 -p 7000
where "account" is the user's account.
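
The two ssh invocations can optionally be collected in ~/.ssh/config so that the tunnel and the login each become a one-word command. The host aliases below are illustrative, not an official configuration:

```
# Sketch of ~/.ssh/config entries (aliases "c3p-gateway" and "avogadro" are hypothetical)
Host c3p-gateway
    HostName 147.162.63.10            # use 192.168.9.15 from inside the DiSC network
    Port 7000
    User account                      # replace with your account
    LocalForward 2000 192.168.16.253:22

Host avogadro
    HostName localhost
    Port 2000
    User account                      # replace with your account
```

With these entries, "ssh c3p-gateway" opens the tunnel and "ssh avogadro" logs in from a second shell.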


Then, in a different shell, login as:

ssh -p 2000 account@localhost


Through port 2000 users can also transfer files directly via the tunnel using the command scp.
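
For example, with the tunnel open in another shell, a file can be copied in either direction (the file name is illustrative; note that scp takes a capital -P for the port):

```shell
# local → cluster: copy results.dat into the remote home directory
scp -P 2000 results.dat account@localhost:~/
# cluster → local: copy it back into the current local directory
scp -P 2000 account@localhost:~/results.dat .
```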


Queues

The queue manager is SLURM and the following queues are available:

  • avogadro: max nodes 40, max walltime 336:00:00 (2 weeks)
  • avogadro-test: max nodes 1, max walltime 10 min
  • avogadro-long: max nodes 10, max walltime 1500:00:00 (ca. 2 months). This queue must be used strictly for jobs that do not allow checkpointing of the state of the calculation. As this is considered an exceptional case, a penalty applies: at most 2 jobs in total can run on this queue at the same time.
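
Assuming each queue corresponds to a SLURM partition of the same name, the state and configured limits of the queues can be inspected once logged in:

```shell
# Node states of the partitions (names assumed to match the queue names)
sinfo -p avogadro,avogadro-test,avogadro-long
# Configured limits (e.g. MaxTime, MaxNodes) of a single partition
scontrol show partition avogadro
```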

Example SLURM file

A typical SLURM batch script is as follows (see the Support page for more help):

#!/bin/sh --login
#SBATCH --job-name=charmm
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 1
#SBATCH --partition=avogadro
#SBATCH --account=avogadro
#SBATCH --time=100:00:00

commands to execute

where the job name, resource requests, and commands should be changed as appropriate.
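
Once the script is saved (here as job.sh, an illustrative name), it can be submitted and monitored with the standard SLURM commands, run after logging in through the tunnel:

```shell
sbatch job.sh        # submit the script; sbatch prints the assigned job ID
squeue -u "$USER"    # list your queued and running jobs
# scancel JOBID      # cancel a job, substituting the ID printed by sbatch
```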
