[Wien] MPI Error while running lapw0_mpi
venky ch
chvenkateshphy at gmail.com
Wed Mar 23 09:31:14 CET 2022
Dear Wien2k users,
I have successfully installed WIEN2k version 21 on our HPC cluster.
However, while running a test calculation I get the following error, and
lapw0_mpi crashes.
=========
/home/proj/21/phyvech/.bashrc: line 43: ulimit: stack size: cannot modify
limit: Operation not permitted
/home/proj/21/phyvech/.bashrc: line 43: ulimit: stack size: cannot modify
limit: Operation not permitted
setrlimit(): WARNING: Cannot raise stack limit, continuing: Invalid argument
[... the same setrlimit() warning is printed 16 times in total ...]
Abort(744562703) on node 1 (rank 1 in comm 0): Fatal error in PMPI_Bcast:
Other MPI error, error stack:
PMPI_Bcast(432).........................: MPI_Bcast(buf=0x7ffd8f8d359c,
count=1, MPI_INTEGER, root=0, comm=MPI_COMM_WORLD) failed
PMPI_Bcast(418).........................:
MPIDI_Bcast_intra_composition_gamma(391):
MPIDI_NM_mpi_bcast(153).................:
MPIR_Bcast_intra_tree(219)..............: Failure during collective
MPIR_Bcast_intra_tree(211)..............:
MPIR_Bcast_intra_tree_generic(176)......: Failure during collective
[1] Exit 15 mpirun -np 32 -machinefile .machine0 /home/proj/21/phyvech/soft/win2k2/lapw0_mpi lapw0.def >> .time00
cat: No match.
grep: *scf1*: No such file or directory
grep: lapw2*.error: No such file or directory
=========
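For context, the setrlimit warnings come from a stack-size line around line
43 of my .bashrc. I have not pasted the exact line; the sketch below is only
an illustration of that kind of line, together with a guarded variant that
ignores the failure on nodes where the hard limit cannot be raised:
=========
# Sketch only -- not the actual line 43 of my .bashrc.
# Unguarded form, which triggers "Operation not permitted" when the
# hard stack limit on the compute node is lower than the request:
ulimit -s unlimited

# Guarded variant: try to raise the soft stack limit, but stay silent
# and continue if the cluster forbids it.
ulimit -s unlimited 2> /dev/null || true
=========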
The .machines file is:
======= for 102 reduced k-points =========
#
lapw0:node16:16 node22:16
51:node16:16
51:node22:16
granularity:1
extrafine:1
========
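For clarity, my reading of what this file requests is annotated below (the
comments are mine and not part of the actual file):
=========
# one MPI lapw0 job over 32 cores: 16 on node16 plus 16 on node22
lapw0:node16:16 node22:16
# two k-point-parallel groups for lapw1/lapw2, each an MPI job on 16 cores;
# the leading 51 is the load-balancing weight, i.e. roughly 51 of the
# 102 reduced k-points per group
51:node16:16
51:node22:16
granularity:1
extrafine:1
=========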
"export OMP_NUM_THREADS=1" has been used in the job submission script.
"run_lapw -p -NI -i 400 -ec 0.00001 -cc 0.0001" has been used to start the
parallel calculations in available nodes.
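For completeness, the relevant part of the job submission script looks
roughly like the sketch below. The scheduler directives are placeholders
assuming a SLURM-style queue, which may differ from the actual setup:
=========
#!/bin/bash
#SBATCH --nodes=2              # placeholder: node16 and node22 in practice
#SBATCH --ntasks-per-node=16   # placeholder: 16 MPI ranks per node

# disable OpenMP threading so only MPI parallelization is used
export OMP_NUM_THREADS=1

# run from the WIEN2k case directory (path is a placeholder)
cd /path/to/case
run_lapw -p -NI -i 400 -ec 0.00001 -cc 0.0001
=========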
Can someone please explain where I am going wrong here? Thanks in advance.
Regards,
Venkatesh
Physics department
IISc Bangalore, INDIA