[Wien] Wien post from pascal.boulet

pboulet pascal.boulet at univ-amu.fr
Mon Mar 20 23:04:38 CET 2023


Dear Peter,

Thank you for your help.

I am rerunning the calculation and I think I made a mistake before: I ran an SCF with the default options from init_lapw, then tried to run a volume optimization with reduced RMTs, but without rerunning the SCF with the smaller RMTs.

Now I have first run the SCF with the smaller RMTs and am continuing with the volume change (optimize.job). It is working fine.
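For reference, the sequence I am now running is essentially the following (the run_lapw options shown are only indicative, not my exact command line):

init_lapw                     # re-initialize the case with the smaller RMT values
run_lapw -ec 0.0001           # converge the SCF for the new setup
x dstart
optimize.job                  # volume change, unmodified script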

Still, I answer your questions below.

Best regards
Pascal

> On 16 March 2023, at 19:39, Peter Blaha <peter.blaha at tuwien.ac.at> wrote:
> 
> Your mail is too big. Here an excerpt and some reply:
> 
> How did you modify   optimize.job  ?  What is your run_lapw line ?

Actually, I only issued the commands:
x dstart
optimize.job
in the submission script for SLURM. No other changes…

I did not modify the optimize.job script.
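For completeness, the SLURM script is basically of this form (node count, wall time, etc. are placeholders here, not my exact settings):

#!/bin/bash
#SBATCH --nodes=1             # one 128-core node
#SBATCH --ntasks=128
#SBATCH --time=24:00:00
# regenerate the starting density, then run the volume optimization
x dstart
optimize.job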


> 
> Probably here is the error. You said:
> 
> I calculated the stress tensor (-pres 0.1) ...

Sorry, that was a mistake in my email: I actually used -str 0.1 as the command option.


> 
> -pres 0.1  is not a valid option for run_lapw. It should be -str 0.1
> 
> Did you read the comments in UG or the update section on the web about the stress tensor ?

Yes, I did. But one thing is still not clear to me: can we request the stress tensor (-str 0.1) after a first complete SCF cycle has been done? I mean that I ran an SCF and then restarted it with the -str 0.1 -I -i 1 options. Maybe that is wrong.

Anyway, this option is not crucial for my purpose; I was just giving it a try.
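Concretely, what I tried was along these lines (the -ec value is only an example; the second command, the restart with -str, is the part I am unsure about):

run_lapw -ec 0.0001           # first a normal, fully converged SCF
run_lapw -str 0.1 -I -i 1     # then restart for one iteration asking for the stress tensor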

> It works "similar" to the forces (-fc 1). Only when the partial pressure is converged, the additional terms in lapw2 are calculated , giving the total tensor.  The partial tensor is "meaningless", i.e. don't worry about its values.
> Remember: The additional term in lapw2 will take quite some cpu time.
> 
> If you see ***INFO   in the :ENE line, you should also grep for :INFO. Most likely not crucial.
> 
> PS: I saw that you have just 5 neq atoms in the cell. Such cells I run usually on a simple PC and even there it does NOT need 5 minutes. What is your :RKM ? For a small matrix size using 64 cores in mpi (I hope you compiled with ELPA) it may be slower than sequential or mpi with less cores. Remember: More cores does not necessarily mean that it runs faster - in fact it can also run MUCH SLOWER !


Well, actually I think it is not just a question of the number of atoms. There are 58 atoms in the conventional cell, but I agree, only 5 are inequivalent.

The problem is rather the way the computing hours used on my project are counted.

I have tried a serial calculation: it takes 4 hours. By the way, I use RKM=7.0, Gmax=16, 26 k-points and 11400 PW.
If I use 128 cores (= 1 node) it takes 40 minutes of wall time, so about 85 core-hours. But whether I use 1 core or 128, the accounting counts 1 full node (or 2 if I use between 129 and 256 cores, etc.). So 4 hours on 1 core costs 512 h on my account!
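To spell the accounting out with those numbers: the 128-core run does 128 x (40/60) h ≈ 85 core-hours of actual work, while the 4-hour serial run is billed as a full node, i.e. 4 h x 128 cores = 512 core-hours for a single busy core.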
So whether time is wasted depends on which side you look at it from. But perhaps the scaling can be improved…

Best regards
Pascal

> 
> 
> -----------------------------------------------
> Thank you Peter and Mark for your responses.
> 
> I have checked the structure, it looks ok and the ‘a’ parameter (cubic) in the struct file seems to agree with the experimental one (19.5 Bohr versus 19.6), but when I calculated the stress tensor (-pres 0.1) at the end of the first SCF I got -26474. GPa. Strange such a large value…
> 
> The initial SCF went fine, but I have not tried for other volumes.
> 
> As you suggested, I have run again the sequence:
> x dstart
> optimize.job
> 
> but I get the same result.
> 
> ---
> To be complete, I split the 128-cores node (1 node=2 processors, 64 cores each) into two for k-points parallelization; I get 2 files: case.klist_1 and case.klist_2. Here is the .machines content:
> # OMP parallelization
> omp_global:1
> #omp_lapw1:1
> #omp_lapw2:1
> #omp_lapwso:1
> #omp_dstart:1
> #omp_sumpara:1
> #omp_nlvdw:1
> # k-point parallelization for lapw1/2 hf lapwso qtl irrep  nmr  optic
> 1:irene4046:64
> 1:irene4046:64
> # MPI parallelization for dstart lapw0 nlvdw
> dstart: irene4046:6
> lapw0: irene4046:6
> nlvdw: irene4046:6
> granularity:1
> extrafine:1
> -----
> 
> Result for :NEC01:
> :NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93461
> :NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93464
> :NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93542
> :NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93483
> 
> For :ENE:
> :ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100251.28453414
> :ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100251.20603572
> :ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100249.25329142
> :ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100252.25418212
> 
> Nothing for :WAR
> 
> 
> 
> For the output of mixing, case.outputm:
> :NEC01: NUCLEAR AND ELECTRONIC CHARGE    760.00000   759.93483
> :OTO   : INTERSTITIAL CHARGE  =    71.485237
> 
> :NEC02: NUCLEAR AND ELECTRONIC CHARGE    760.00000   761.00379958
> :MIX  :   PRATT  REGULARIZATION:  2.00E-04 GREED: 0.00100
> :CTO   : INTERSTITIAL CHARGE  =    70.478002
> 
> :NEC03: NUCLEAR AND ELECTRONIC CHARGE    760.00000   760.00000000
> 
> :ENE  : *INFO***** TOTAL ENERGY IN Ry =      -100252.25418212
> 
> :STRESS_GPa001:    246361.85088        0.00000        0.00000   partial
> :STRESS_GPa002:         0.00000   246361.85088        0.00000   partial
> :STRESS_GPa003:         0.00000        0.00000   246361.85088   partial
> 
> :PRESSURE:        246361.85088 GPa     partial
> 
> :FOR001:   1.ATOM          0.000          0.000          0.000 0.000 partial forces
> :FOR002:   2.ATOM          3.326          0.000          0.000 -3.326 partial forces
> 
> 

Pascal Boulet
—
Professor in computational materials chemistry - DEPARTMENT OF CHEMISTRY
University of Aix-Marseille - Avenue Escadrille Normandie Niemen - F-13013 Marseille - FRANCE
Tél: +33(0)4 13 55 18 10 - Fax : +33(0)4 13 55 18 50
Email: pascal.boulet at univ-amu.fr



