[Wien] lapw2_mpi crashes during TB-mBJ calculations for WIEN2k_23.2

髙村仁 takamura at material.tohoku.ac.jp
Sun Jun 18 13:06:25 CEST 2023


Dear WIEN2k developers and users,

I would like to report the following problem I have encountered with WIEN2k_23.2.
WIEN2k_23.2 works fine for me, except that lapw2_mpi crashes during TB-mBJ calculations run with MPI parallelization. First, I performed TB-mBJ calculations for some oxides, such as MgO and TiO2, using WIEN2k_21.1 and MPI parallelization without any problems, and the results, e.g., the corrected band gaps, are excellent. Standard SCF calculations with WIEN2k_23.2, including MPI-parallel runs, are also fine.
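
For reference, my parallel setup and TB-mBJ sequence follow, as far as I can tell, the standard procedure from the user's guide. The sketch below is only illustrative: the host name and core count are placeholders, and the interactive init_mbj_lapw prompts are omitted.

 .machines (single node, 4 MPI processes per program):
  lapw0:node01:4
  1:node01:4
  granularity:1
  extrafine:1

 TB-mBJ workflow (roughly):
  run_lapw -p              # converge a standard SCF first
  save_lapw case_pbe
  init_mbj_lapw            # 1st call: prepares case.inm_vresp / R2V output
  run_lapw -p -i 1         # one cycle to generate case.r2v and case.vresp
  rm *.broyd*
  init_mbj_lapw            # 2nd call: switches the potential to TB-mBJ
  run_lapw -p              # lapw2 now runs with -tau; with 23.2, lapw2_mpi crashes here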

Meanwhile, after init_mbj (the -tau switch is now on for lapw2), MPI-parallel calculations with WIEN2k_23.2 always crash in the first lapw2 step. The crash is reproducible for every case.struct I tested, including the TiO2 example from the WIEN2k website. It should also be noted that serial and k-point-parallel (without MPI) TB-mBJ calculations run fine in the same WIEN2k_23.2 environment. The error messages from the lapw2_mpi crash are as follows:

lapw2.error:
**  testerror: Error in Parallel LAPW2
lapw2_i.error:
Error in LAPW2

The crash thus appears to be a sudden death of the MPI processes; for a run with 4 MPI processes, STDOUT shows the following MPI error messages:

...
LAPW1 END
LAPW1 END
LAPW1 END
LAPW1 END
LAPW2 - FERMI; weights written
Abort(805421582) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Recv: Message truncated, error stack:
PMPI_Recv(171): MPI_Recv(buf=0x7ffdcd24c678, count=1, MPI_INTEGER, src=1, tag=MPI_ANY_TAG, comm=0x84000005, status=0x2ae8b59a3fe0) failed
(unknown)(): Message truncated
Abort(67224078) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Recv: Message truncated, error stack:
PMPI_Recv(171): MPI_Recv(buf=0x7ffc824e7f78, count=1, MPI_INTEGER, src=1, tag=MPI_ANY_TAG, comm=0x84000005, status=0x2af2d64d3fe0) failed
(unknown)(): Message truncated
Abort(939639310) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Recv: Message truncated, error stack:
PMPI_Recv(171): MPI_Recv(buf=0x7ffea8ea88f8, count=1, MPI_INTEGER, src=1, tag=MPI_ANY_TAG, comm=0x84000005, status=0x2b513d417fe0) failed
(unknown)(): Message truncated
Abort(402768398) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Recv: Message truncated, error stack:
PMPI_Recv(171): MPI_Recv(buf=0x7ffde26becf8, count=1, MPI_INTEGER, src=1, tag=MPI_ANY_TAG, comm=0x84000005, status=0x2b92d0adffe0) failed
(unknown)(): Message truncated
...

I first thought this might only be the case for my cluster:
 12 nodes x Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz, Linux 3.10.0-1160.el7.x86_64
 Intel compilers (2021.7.1 20221019)
 Intel MPI libraries (Intel(R) MPI Library for Linux* OS, Version 2021.7 Build 20221022).
So, I compiled WIEN2k_23.2 on a different cluster with different versions of the Intel compilers and MPI libraries (ifort (IFORT) 19.1.3.304 20200925 and Intel(R) MPI Library for Linux* OS, Version 2019 Update 9 Build 20200923). The result is exactly the same: TB-mBJ calculations with MPI parallelization run without problems under WIEN2k_21.1, but under WIEN2k_23.2 lapw2 always crashes when a TB-mBJ calculation is run MPI-parallel (serial and k-point-parallel runs without MPI are fine). Again, the crash is reproducible for every oxide case.struct I tested.

I would greatly appreciate any comments and suggestions to solve this problem.

Best regards,


Dr. Hitoshi Takamura
Tohoku Univ., Japan


