[Wien] MPI error

leila mollabashi le.mollabashi at gmail.com
Wed May 19 21:12:32 CEST 2021


Dear all WIEN2k users,
Thank you for your replies and guidance.
 > You need to link with the blacs library for openmpi.
I tried to recompile WIEN2k by linking it with the BLACS library for
OpenMPI ("mkl_blacs_openmpi_lp64"), but the compilation failed with
gfortran errors. A video of this recompile is available at
https://files.fm/u/zzwhjjj5q. The SRC_lapw0 and SRC_lapw1 compile.msg
files are uploaded to https://files.fm/u/zuwukxy8x and
https://files.fm/u/cep4pvnvd.
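For completeness, the ScaLAPACK/BLACS part of the parallel link options
(the RP_LIBS entry set through siteconfig_lapw) that I am trying to use
would look roughly like this; the exact MKL library list is an assumption
based on a standard ifort + MKL + OpenMPI setup and may need adjusting:

-lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread

The point, as advised, is that mkl_blacs_openmpi_lp64 (and not
mkl_blacs_intelmpi_lp64) has to appear when OpenMPI is used.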
The cluster's OpenMPI and FFTW are compiled with gfortran, so I have
also installed OpenMPI 4.1.0 and FFTW 3.3.9 in my home directory after
loading ifort and icc, using the following commands:
./configure --prefix=/home/users/mollabashi/expands/openmpi CC=icc F77=ifort
FC=ifort --with-slurm --with-pmix --enable-shared --with-hwloc=internal
./configure --prefix=/home/users/mollabashi/expands/fftw MPICC=mpicc CC=icc
F77=ifort --enable-mpi --enable-openmp --enable-shared
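If I understand correctly, a local installation like this also needs the
matching environment settings in .bashrc so that the locally built OpenMPI
and FFTW, rather than the system (gfortran-built) ones, are found at run
time. A sketch, using the --prefix paths above:

export PATH=/home/users/mollabashi/expands/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/home/users/mollabashi/expands/openmpi/lib:/home/users/mollabashi/expands/fftw/lib:$LD_LIBRARY_PATH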
With this setup, WIEN2k compiled correctly, as shown in the video at
https://files.fm/u/rk3vfqv5g. However, the MPI run fails with an
OpenMPI-related error, as shown in https://files.fm/u/tcz2fvwpg. The
.bashrc, submit.sh, and slurm.out files are at
https://files.fm/u/dnrrwqguy.
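To make clear what I am trying to run, a minimal single-node SLURM test
script along the lines suggested below would look roughly like this (job
name, task count, and time limit are placeholders; the mpirun line is the
one from Peter's mail):

#!/bin/bash
#SBATCH --job-name=lapw0_test
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:30:00
# put the locally built OpenMPI first in the search paths
export PATH=/home/users/mollabashi/expands/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/home/users/mollabashi/expands/openmpi/lib:$LD_LIBRARY_PATH
cd $SLURM_SUBMIT_DIR
mpirun -np 4 $WIENROOT/lapw0_mpi lapw0.def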
Would you please guide me on how to solve the gfortran errors?
Should I build OpenMPI with a different configuration to solve the SLURM
error in the MPI calculation?
Sincerely yours,
Leila

On Thu, May 6, 2021 at 9:44 PM Laurence Marks <laurence.marks at gmail.com>
wrote:

> Peter beat me to the response -- please do as he says and move stepwise
> forward, posting single steps if they fail.
>
> On Thu, May 6, 2021 at 10:38 AM Peter Blaha <pblaha at theochem.tuwien.ac.at>
> wrote:
>
>> Once the blacs problem has been fixed, the next step is to run lapw0 in
>> sequential and parallel mode.
>>
>> Add:
>>
>> x lapw0     and check the case.output0 and case.scf0 files (copy them to
>> a different name) as well as the message from the queuing system.
>>
>> add:   mpirun -np 4 $WIENROOT/lapw0_mpi lapw0.def
>> and check the messages and compare the results with the previous
>> sequential run.
>>
>> And finally:
>> create a .machines file with:
>> lapw0:localhost:4
>>
>> and execute
>> x lapw0 -p
>>
>> -------------
>> The same procedure can be done with lapw1.
>>
>>
>> --
> Professor Laurence Marks
> Department of Materials Science and Engineering
> Northwestern University
> www.numis.northwestern.edu
> "Research is to see what everybody else has seen, and to think what nobody
> else has thought" Albert Szent-Györgyi
> _______________________________________________
> Wien mailing list
> Wien at zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at:
> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>