[Wien] lapw2c tries to read an anomalous amount of data
Peter Blaha
pblaha at theochem.tuwien.ac.at
Fri Jul 26 22:46:10 CEST 2019
Dear all,
This is a long thread by now, and it arose because of insufficient
information at the beginning. I guess we were all thinking of a huge
mpi-calculation ....
The thread should have started like:
I'm running a case with XX atoms (matrix-size YYYYY), with/without
inversion symmetry; NN k-points.
It is running on a single PC (Intel XX with 24 GB RAM and 4 cores) and I
use WIEN2k_19.1 with ifort/mkl compilation.
I'm running 4 k-parallel jobs and lapw1 runs fine. However, in lapw2 .....
------------------------
Size: lapw2 reads the vector files k-point by k-point (one at a time),
so if you have more than one k-point, the size of the vector file has
nothing to do with the memory of lapw2.
The in-memory size will be NMAT*NUME only. There are other large arrays
of dimension (NMAT, lm, NUME) (ALMs, BLMs, CLMs, ...).
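As a back-of-envelope check (my sketch, not WIEN2k output: take NMAT and
NUME from your case.output1; the values below are made up), the
eigenvector array in lapw2c with complex*16 elements (16 bytes) needs:

   # hypothetical values; read NMAT and NUME from your case.output1
   NMAT=12000; NUME=800
   awk -v n=$NMAT -v m=$NUME \
       'BEGIN { printf "vector array: ~%.2f GB\n", n*m*16/2^30 }'

With inversion symmetry (real lapw2) it is 8 bytes per element, and the
(NMAT, lm, NUME) arrays mentioned above come on top of this.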
However, lapw2 runs with a loop over atoms, so when you have more than
1 k-point, it needs to read these vector files NATOM times.
And with 4 k-parallel jobs this is a lot of PARALLEL disk-I/O for a
poor SATA disk. I can imagine that this is the bottleneck.
Disk I/O can be reduced in at least 2 ways:
As Laurence Marks suggested, reduce E-top in case.in1 such that only a
few unoccupied states are included.
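For illustration only (the layout of this line differs between WIEN2k
versions and the numbers are hypothetical; check your own case.in1(c)),
E-top is the second energy on the final K-VECTORS line:

   K-VECTORS FROM UNIT:4  -9.0  1.5  ...   (default: ~1.5 Ry above E_F)
   K-VECTORS FROM UNIT:4  -9.0  0.5  ...   (during scf: fewer unoccupied states)

For a later DOS calculation you raise it again and rerun lapw1, as
Laurence Marks describes in his mail quoted below.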
The second, even more efficient way was suggested by Pavel Ondracka: use
OMP parallelization, e.g. export OMP_NUM_THREADS=2 and only 2 k-parallel
jobs. If this is still too much parallel I/O, you could also use 4
OpenMP parallel threads and no k-parallelization at all, but this will
be a little slower.
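Concretely, such a setup could look like this (a minimal sketch for one
4-core PC; "localhost" and the thread counts are just the example values
from above):

   # .machines: 2 k-parallel jobs on the local machine
   1:localhost
   1:localhost
   granularity:1
   extrafine:1

   # in the shell (csh users: setenv OMP_NUM_THREADS 2)
   export OMP_NUM_THREADS=2
   run_lapw -p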
You may also try the mpi-parallel version, but definitely ONLY if you
have a recent ELPA installed; otherwise it will be much slower. Note,
however, that the mpi version of lapw1 needs more memory (but still
less than 4 k-parallel lapw1 jobs) ...
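If you do test it, the .machines line for one mpi job instead of
separate k-jobs is sketched below (4 mpi processes on the local machine;
again only an illustration, and lapw1 benefits only with ELPA):

   # .machines: a single job running mpi-parallel on 4 cores
   1:localhost:4
   granularity:1
   extrafine:1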
Regards
On 26.07.2019 at 10:37, Laurence Marks wrote:
> If I remember right, the largest piece of memory is the vector file,
> so this should be a reasonable estimate.
>
> During the scf convergence you can reduce this by *carefully* changing
> the numbers at the end of case.in1(c). You don't really need to go to
> 1.5 Ryd above E_F (and you can similarly reduce nband for ELPA). For
> DOS etc. later, you increase these and rerun lapw1 etc.
>
> On Fri, Jul 26, 2019 at 9:27 AM Luc Fruchter <luc.fruchter at u-psud.fr> wrote:
>
> Yes, I have shared memory. Swap on disk is disabled, so the system must
> manage differently here.
>
> I just wonder now: is there a way to estimate the memory needed for the
> lapw2 jobs without running scf up to that point? Is it the total .vector
> size?
>
> --
> Professor Laurence Marks
> Department of Materials Science and Engineering
> Northwestern University
> www.numis.northwestern.edu
> Corrosion in 4D: www.numis.northwestern.edu/MURI
> Co-Editor, Acta Cryst A
> "Research is to see what everybody else has seen, and to think what
> nobody else has thought"
> Albert Szent-Gyorgi
>
> _______________________________________________
> Wien mailing list
> Wien at zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at: http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>
--
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300 FAX: +43-1-58801-165982
Email: blaha at theochem.tuwien.ac.at WIEN2k: http://www.wien2k.at
WWW: http://www.imc.tuwien.ac.at/tc_blaha
--------------------------------------------------------------------------