[Wien] MPI Error

leila mollabashi le.mollabashi at gmail.com
Sat May 29 08:39:45 CEST 2021


Dear WIEN2k users,
Following the previous comment referring me to the cluster admin, I
contacted the admin and, on their advice, successfully recompiled WIEN2k
using the cluster modules.
>Once the blacs problem has been fixed,
For example, is the following ldd output correct?
libmkl_blacs_openmpi_lp64.so =>
/opt/exp_soft/local/generic/intel/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.so
(0x00002b21efe03000)
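For reference, that line comes from checking the linked libraries of the
MPI binaries with ldd (assuming $WIENROOT points to my installation), e.g.:

ldd $WIENROOT/lapw1_mpi | grep -iE "blacs|scalapack|mpi"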
>the next step is to run lapw0 in sequential and parallel mode.
>Add:
>x lapw0     and check the case.output0 and case.scf0 files (copy them to
>a different name) as well as the message from the queuing system. ...
Both “x lapw0” and “mpirun -np 4 $WIENROOT/lapw0_mpi lapw0.def” run
correctly when executed interactively.
“x lapw0 -p” also runs correctly with the following “.machines” file:
lapw0:e0017:4
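As far as I understand the userguide, this lapw0 line can later be combined
with the lapw1 lines (shown further below) into a single “.machines” file;
the lines starting with # are my annotations:

# mpi-parallel lapw0 on four cores of e0017
lapw0:e0017:4
# two mpi-parallel lapw1 jobs, two cores each
1:e0017:2
1:e0017:2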
>The same thing could be made with lapw1
“x lapw1” and “mpirun -np 4 $WIENROOT/lapw1_mpi lapw1.def” also run
correctly when executed interactively. However, “x lapw1 -p” stops when I
use the following “.machines” file:
1:e0017:2
1:e0017:2
It fails with the error:
bash: mpirun: command not found
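Since mpirun is found when I work interactively but not when the parallel
script starts lapw1_mpi, my guess is that the non-interactive shell spawned
on the node does not load the MPI module. A minimal check, assuming the
parallel scripts reach the node via ssh:

which mpirun              # succeeds in my interactive shell
ssh e0017 'which mpirun'  # fails if the MPI module is loaded only in interactive shells

If that is indeed the cause, loading the cluster modules from my shell
startup file (e.g. ~/.bashrc) should make mpirun visible to the spawned
shells as well.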
The output files are collected at https://files.fm/u/7cssehdck.
Would you please help me fix the parallel problem as well?
Sincerely yours,
Leila