Avogadro
Revision as of 14:25, 7 August 2021
Description
Avogadro is the largest, but also the oldest, cluster in the C3P facility. It is equipped with the following hardware and software:
- 71 nodes with 2 x CPU Intel Woodcrest Dual Core 2.6 GHz (4 cores), 2 x HD SAS 72 GB, 8 GB RAM, Infiniband, OS Red Hat Enterprise Linux WS release 4
- 9 nodes with 2 x CPU Intel Woodcrest Quad Core 2.6 GHz (8 cores), 2 x HD SAS 72 GB, 16 GB RAM, Infiniband, OS Red Hat Enterprise Linux WS release 4
for a total of 80 nodes, 356 cores.
Access
- Linux and macOS users can log in using a terminal; Windows 10 users can use PowerShell.
The first step is to open an SSH tunnel. From within the DiSC network this can be done with:
ssh -L 2000:192.168.16.253:22 account@192.168.9.15 -p 7000
- or, from outside the Department, with:
ssh -L 2000:192.168.16.253:22 account@147.162.63.10 -p 7000
- where "account" is the user's account.
Then, in a different shell, log in with:
ssh -p 2000 account@localhost
Through local port 2000, users can also transfer files directly through the tunnel with the scp command.
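For example, with the tunnel open, a file can be copied in either direction as sketched below. The file name results.tar and the account name are placeholders; note that, unlike ssh, scp takes the port with a capital -P:

```shell
# Copy a local file to the home directory on Avogadro through the tunnel.
scp -P 2000 results.tar account@localhost:~/

# Copy a file back from the cluster to the current local directory.
scp -P 2000 account@localhost:~/results.tar .
```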
Queues
The queue manager is SLURM and the following queues are available:
- avogadro: max nodes 40, max walltime 336:00:00 (2 weeks)
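Once logged in, the state of the queue can be inspected with the standard SLURM commands; a quick sketch, using the partition name listed above:

```shell
# Show the avogadro partition: node counts, node states, and the time limit.
sinfo -p avogadro

# List the jobs currently queued or running in that partition.
squeue -p avogadro
```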
Example SLURM file
A typical SLURM script looks as follows (see the Support page for more help):
#SBATCH --job-name=charmm
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 1
#SBATCH --partition=avogadro
#SBATCH --account=avogadro
#SBATCH --time=100:00:00
commands to execute
where the placeholder commands to execute, and options such as the job name and time limit, should be changed as appropriate.
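Putting the directives together, a complete batch script might look like the sketch below. The shebang line and the example command are assumptions for illustration, not part of the official template; the #SBATCH lines are comments to the shell and are read only by SLURM:

```shell
#!/bin/bash
#SBATCH --job-name=charmm
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 1
#SBATCH --partition=avogadro
#SBATCH --account=avogadro
#SBATCH --time=100:00:00

# Replace this line with the actual commands of the job.
echo "running on $(hostname)"
```

Assuming the script is saved as job.sh, it would be submitted with `sbatch job.sh`, monitored with `squeue -u $USER`, and cancelled with `scancel <jobid>`.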