[Wien] xxmr2d:out of memory / estimate memory consumption

Ilias, Miroslav M.Ilias at gsi.de
Tue Jul 4 20:08:29 CEST 2023


Thanks for your answer.


So summing all these sizes (H, S, hpanel, spanel, spanelus, ...) is a good way to estimate the memory per MPI task.
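For example, a one-liner along these lines would add them up (a rough sketch; the file name is the one from the output quoted further below, and only the listed setup allocations are counted, not any additional diagonalization workspace):

  awk '/allocate/ && /MB/ {for(i=2;i<=NF;i++) if($i=="MB") sum+=$(i-1)} END{printf "%.1f MB per MPI task (listed allocations only)\n", sum}' TsOOQg.output1_1

For the allocations quoted below this comes to roughly 6170 MB, i.e. about 6 GB per MPI task.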


Ad: "This guess indicates that you should be OK, but do your nodes really have 10Gb/core? That would be unusually large."  Good point, there is some restriction, I think 2gb/core. Again, I have to check it with cluster admin.


Best,


Miro


________________________________
From: Wien <wien-bounces at zeus.theochem.tuwien.ac.at> on behalf of Laurence Marks <laurence.marks at gmail.com>
Sent: 04 July 2023 18:17:15
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] xxmr2d:out of memory / estimate memory consumption

If you look at the output you provided, each local copy of H and S is about 3 GB. Adding another 3 GB or so for luck gives roughly 10 GB per core, which suggests that you would need about 360 GB, assuming that you are only running one k-point with 36 cores.

This guess indicates that you should be OK, but do your nodes really have 10 GB/core? That would be unusually large.

Also, 36 cores is really small, a point that Peter made some time ago; something like 120-256 cores would be more appropriate.
--
Professor Laurence Marks (Laurie)
Department of Materials Science and Engineering, Northwestern University
www.numis.northwestern.edu
"Research is to see what everybody else has seen, and to think what nobody else has thought" Albert Szent-Györgyi

On Tue, Jul 4, 2023, 10:02 Ilias, Miroslav <M.Ilias at gsi.de> wrote:

Greetings,


I have the big system from https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg22609.html .


After fixing the OpenMPI compilation of WIEN2k I proceeded further to the lapw1_mpi module. But here I got the error "xxmr2d:out of memory" with the SBATCH parameters N=1, n=36 (that is, 36 MPI tasks) and --mem=450GB.
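For completeness, a minimal sketch of what the batch script might look like (only the SBATCH values are the ones stated above; the environment setup and the launch via run_lapw -p with a matching .machines file are assumptions, not taken from this message):

  #!/bin/bash
  #SBATCH -N 1            # one node
  #SBATCH -n 36           # 36 MPI tasks
  #SBATCH --mem=450G      # 450 GB total memory on the node, as stated above
  # load the WIEN2k and OpenMPI environment here (site-specific)
  run_lapw -p             # parallel run; lapw1_mpi is started according to .machines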


Now I am looking for a way to estimate the total memory consumption, i.e. the SBATCH --mem value.


In the TsOOQg.output1_1 file I see some information about the memory allocations in MB... Would it please be possible to estimate the SBATCH memory requirement from this information?




MPI-parallel calculation using    36 processors
Scalapack processors array (row,col):   6   6
Matrix size        84052
Optimum Blocksize for setup 124 Excess %  0.428D-01
Optimum Blocksize for diag  30 Excess %  0.143D-01
Base Blocksize   64 Diagonalization   32
         allocate H      2997.6 MB          dimensions 14016 14016
         allocate S      2997.6 MB          dimensions 14016 14016
    allocate spanel        13.7 MB          dimensions 14016    64
    allocate hpanel        13.7 MB          dimensions 14016    64
  allocate spanelus        13.7 MB          dimensions 14016    64
      allocate slen         6.8 MB          dimensions 14016    64
        allocate x2         6.8 MB          dimensions 14016    64
  allocate legendre        89.0 MB          dimensions 14016    13    64
allocate al,bl (row)         4.7 MB          dimensions 14016    11
allocate al,bl (col)         0.0 MB          dimensions    64    11
        allocate YL         3.4 MB          dimensions    15 14016     1
number of local orbitals, nlo (hamilt)     1401
      allocate YL          20.5 MB          dimensions    15 84052     1
      allocate phsc         1.3 MB          dimensions 84052

Time for al,bl    (hamilt, cpu/wall) :        24.49       24.82
Time for legendre (hamilt, cpu/wall) :         5.19        5.21


And lapw1.error:

**  Error in Parallel LAPW1
**  LAPW1 STOPPED at Tue 04 Jul 2023 10:36:17 AM CEST
**  check ERROR FILES!
Error in LAPW1

Best,


Miro








