[Wien] mpi problem
Peter Blaha
pblaha at theochem.tuwien.ac.at
Mon Oct 31 16:13:57 CET 2005
Hi,
You did not mention anything about SCALAPACK and BLACS, which are also
required by lapw1mpi.
I don't think that the fine-grained parallel version of WIEN2k is of much
use for you. Because SCALAPACK is used for the diagonalization, lapw1mpi
needs more than twice the memory of lapw1 and is also twice as slow.
It really pays off only with more nodes (4 as a minimum, better
8, 16, 32, 36, 64) AND it requires fast communication (a 1 Gbit network is
too slow).
In addition, it is useful only for "big" cases (from 50 atoms upwards).
I would recommend the k-point parallel version, which is very efficient.
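For your three machines, such a k-point parallel .machines file could look
like this (only a sketch; the leading numbers are relative speed weights,
and I reuse the host names from your file):

#################
1:fabio
1:boris
1:srv20
granularity:1
#################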
In addition, try the MPI version of lapw0 (see the extra .machines line
sketched below). Communication is small in this case, and SCALAPACK is not
necessary.
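For that, one additional line in the same .machines file is enough (again
a sketch, assuming one lapw0 process per machine; please check the exact
syntax against the user's guide):

#################
lapw0: fabio:1 boris:1 srv20:1
#################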
> Dear WIEN users,
>
> I've faced a problem with fine-grained parallel calculations on our little
> cluster of three Pentium 4 3.0 GHz machines. I use mpich2 (v. 1.0.2p1),
> ifort 9.0, and mkl 8.0. The problem is that the lapw1_mpi processes go to
> sleep shortly after the start and never wake up. I have tried several
> different versions of the .machines file; the result was always the same.
> For example:
>
> #################
> 1:fabio:2 boris:2
> 1:srv20:2
> granularity:1
> extrafine
> #################
>
> Any idea why this trouble happens?
>
> Thank you for your help.
>
P.Blaha
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-15671 FAX: +43-1-58801-15698
Email: blaha at theochem.tuwien.ac.at WWW: http://info.tuwien.ac.at/theochem/
--------------------------------------------------------------------------