[Wien] slurm script
pluto
pluto at physics.ucdavis.edu
Mon Jun 17 16:12:18 CEST 2024
Dear All,
I am trying to set up the SLURM submission without passwordless ssh. My
parameters are:
- only k-parallel and OMP (no mpi)
- 8 cores per node (it is an older cluster)
I have started with this script:
http://www.wien2k.at/reg_user/faq/slurm.job
and slightly adapted it (see the bottom of this email).
I have gotten the following error:
[lplucin at iffslurm Au-bulk-test]$ more slurm-69063.out
iffcluster0414: Using Ethernet for MPI communication.
SBATCH: Command not found.
DIRECTORY = /local/th1/topo/lplucin/Au-bulk-test
WIENROOT = /Users/lplucin/WIEN2k
SCRATCH = /SCRATCH
SLURM_NTASKS: Undefined variable.
Then I tried to define:
setenv SLURM_NTASKS_PER_NODE 8
setenv SLURM_NTASKS 32
but this did not help much: something started on one node, but crashed
immediately.
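For reference, one thing I could still try (a sketch, assuming the script is really submitted via sbatch and stays in tcsh) is to only fall back to a manual value when SLURM did not export the variable, instead of overwriting it unconditionally:

```shell
#!/bin/tcsh
# Sketch: keep SLURM's exported value when present, else set a manual fallback
if (! $?SLURM_NTASKS) then
    setenv SLURM_NTASKS 8    # assumed fallback: 8 cores per node on this cluster
endif
echo "SLURM_NTASKS = $SLURM_NTASKS"
```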
Any advice would be appreciated!
Best,
Lukasz
#!/bin/tcsh
#SBATCH -J compound
#SBATCH --tasks-per-node=8
##SBATCH --mail-user=xxx at yyy.zzz
##SBATCH --mail-type=END
## The next 3 lines may allow the definition of specific queues depending on your cluster
#SBATCH --partition topo
##SBATCH --qos=XXX
##SBATCH --account=XXX
#
#########################
#SBATCH -N 4 # define number of nodes
set mpijob=1 # define number of cores for one lapw1/2 mpi-job
##
## The lines above need to be adapted depending on your case (size of the
## problem and number of k-points).
## The example requests 64 cores on 4 nodes (because cores_per_node is 16),
## and configures WIEN2k such that 4 k-point-parallel jobs will run on 16 cores
## (one node) each in mpi-mode, i.e. it assumes that you have 4, 8, 12, ... k-points
## and the case is of a size that it runs reasonably on 16 cores.
## For a bigger case and only 1 k-point, you would set mpijob=64 instead.
#########################
set cores_per_node=8
setenv OMP_NUM_THREADS 1
set lapw2_vector_split=4
setenv PSM_SHAREDCONTEXTS_MAX 4
setenv SCRATCH /SCRATCH                  # set it to recommended value
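PS: Since the error complains about SLURM_NTASKS being undefined, I understand these header lines should make SLURM export it (a sketch for my 4 nodes x 8 tasks; whether --tasks-per-node alone already exports it may depend on the SLURM version):

```shell
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --ntasks=32    # with this requested, the job should see SLURM_NTASKS=32
```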