[Wien] Confusion regarding the convergence of a supercell and the parallel calculation
Peter Blaha
pblaha at theochem.tuwien.ac.at
Sat Feb 13 11:28:28 CET 2021
Forces of 6 mRy/bohr are in general still small. You may do an
optimization, but unless you have some very weak bonds, not much will
happen.
Convergence and k-mesh: For larger cells, it is typical that people
start with MUCH TOO large RKMAX values and k-meshes. Always start with
small parameters and check convergence later.
Small parameters: RKMAX depends on your elements and RMT radii. You have
some ABX3 compound, so I suppose your smallest sphere is O (and you have
followed the recommendations of setrmt). In this case, I'd certainly
start with e.g. RKMAX=6 (maybe even 5.5 for an even larger supercell).
All optimizations, forces, ... can be done with this value. At the end,
you increase RKMAX to 7 and check whether e.g. forces, band gaps,
moments, ... have changed.
Of course, if your smallest atom were a 3d transition metal, you should
rather start with RKMAX=7 - 7.5 and definitely check with 8 - 9.
Please check our FAQ page at www.wien2k.at on that topic.
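The RKMAX check described above can be sketched as a small script that compares key quantities between two runs at different RKMAX. All numbers below are hypothetical placeholders for illustration, not results from any real calculation:

```python
# Sketch: decide whether RKMAX is converged by comparing key quantities
# (forces, band gap, ...) from two SCF runs at different RKMAX.
# All numbers are hypothetical placeholders.

def converged(run_lo, run_hi, tol):
    """True if every quantity changed by less than its tolerance."""
    return all(abs(run_hi[k] - run_lo[k]) < tol[k] for k in tol)

# Hypothetical results at RKMAX=6 and RKMAX=7
# (max_force in mRy/bohr, gap in eV) and tolerances you consider acceptable
rkmax6 = {"max_force": 6.6, "gap": 1.52}
rkmax7 = {"max_force": 6.4, "gap": 1.50}
tol    = {"max_force": 1.0, "gap": 0.05}

print(converged(rkmax6, rkmax7, tol))  # True -> RKMAX=6 is sufficient here
```

If any quantity you care about still changes by more than your tolerance, increase RKMAX further and repeat.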
k-mesh: For a 1-2 atom system, you would use 10000 (metal) down to 500
(insulator) k-points. If you have 80 atoms (and an insulator?), you can
divide this number by ~80. So your first k-mesh could be something like
2x3x2. Again, at the end (or for a DOS), you would probably double the
mesh and check your results. But in particular the forces will hardly
change.
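The scaling rule above is simple arithmetic; a minimal sketch (the 500-point insulator reference comes from the text, the helper name is mine):

```python
# Sketch of the k-point scaling rule: divide the 1-2 atom reference
# count by the number of atoms in the supercell.

def scaled_kpoints(reference_kpts, n_atoms):
    """Scale the 1-2 atom reference k-point count down for a larger cell."""
    return max(1, round(reference_kpts / n_atoms))

# Insulator reference: ~500 k-points for a 1-2 atom cell.
# For an 80-atom supercell:
print(scaled_kpoints(500, 80))  # 6 -> a 2x3x2 mesh (12 points) is on the safe side
```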
Parallelization: using OMP_NUM_THREADS=40 is complete nonsense.
A general hint for all NEW WIEN2k users:
READ THE USERSGUIDE (maybe even twice). Put it next to your bed and
read 10-20 pages every evening!!! Only then will you understand
what the code can do and how to do it.
READ our www.wien2k.at site (registered users: FAQ pages, ...;
workshops: look at the (basic) videos and do all the exercises yourself).
The UG will tell you what the maximum OMP_NUM_THREADS could be.
Another tip about parallelization: run a single SCF cycle with
different OMP_NUM_THREADS settings and check the timings in
case.dayfile. Then you know what your system can do.
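Once you have collected such timings, evaluating them is straightforward; here is a sketch with made-up numbers (take yours from case.dayfile):

```python
# Sketch: evaluate hypothetical per-cycle timings (seconds) measured
# with different OMP_NUM_THREADS to pick a sensible setting.
# The numbers are made up for illustration.

timings = {1: 1200.0, 2: 650.0, 4: 380.0, 8: 310.0, 16: 300.0}

for threads, t in timings.items():
    speedup = timings[1] / t
    efficiency = speedup / threads
    print(f"{threads:2d} threads: speedup {speedup:4.1f}, "
          f"efficiency {efficiency:4.0%}")

# Beyond the point where efficiency collapses (here around 4 threads),
# extra OMP threads are wasted; those cores are better spent on
# k-point parallelism.
```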
The second parallelization is over k-points. Why do you use just 2 of
your cores?? Use as many as you have k-points (for a small k-grid).
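For k-point parallelism, one `1:localhost` line in .machines launches one parallel job, so a sketch for e.g. 4 k-point jobs could look like this (the omp_global value is just an illustrative choice, following the fragment quoted below):

```
# one line per parallel k-point job, each on localhost
1:localhost
1:localhost
1:localhost
1:localhost
granularity:1
extrafine:1
# combine with a moderate OMP setting, e.g.:
omp_global:2
```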
In general: don't ask others for details (which they cannot answer
specifically, since they don't know your system). BUT: make your OWN
COMPUTER EXPERIMENTS. Test different things and analyse/evaluate what is
good/necessary/....
On 13.02.2021 at 06:02, Anupriya Nyayban wrote:
> Dear experts and users,
>
> I am trying to find the convergence for a 2*2*1 supercell of a doped
> orthorhombic ABX3-type structure (the system specification is provided
> below). The space group is Pnma for the pure structure and it changes to
> P-1 for the doped supercell. The supercell consists of 80 atoms,
> among which 40 are inequivalent. The scf is calculated with "run_lapw -p
> -ec 0.0001 -cc 0.001 -fc 1". Forces on some atoms are more than 5
> mRy/a.u. (maximum 6.629 mRy/a.u.). Should we go for geometry
> minimization? And is the changed space group correct or not?
> The scf is working fine for RKmax=7, whereas lapw2 crashes (stating
> forrtl: severe (67): input statement requires too much data, unit 10)
> for RKmax=8 and k-mesh 4*9*5. Is it fine to proceed with RKmax=7?
>
>
>
> And also, the parallel calculation is running in a HPC (Processor: dual
> socket 18 core per socket intel skylake processor, RAM: 96 GB ECC DDR4
> 2133 MHz RAM in balanced configuration, Operating system: CentOS-7.3,
> using compiler/intel 2018.5.274). OMP_NUM_THREADS=40 is set in the
> bashrc file. The last few lines of .machines read as
> "1: localhost
> 1:localhost
> granularity:1
> extrafine:1
> #uncomment for specific OMP_parallelization (overwriting a global
> OMP_NUM_THREAD)
> #omp_global:4
> #or use program-specific parallelization:
> #omp_lapw0:4
> #omp_lapw1:4
> #omp_lapw2:4
> #omp_lapwso:4
> #omp_dstart:4
> #omp_sumpara:4
> #omp_nlvdw:4"
> I am a little bit confused about how the parallel distribution is made
> and whether we are using the system maximally for the calculation or
> not. I am using a parallel calculation for the first time.
>
> Looking forward to your valuable opinion.
>
>
> Thank you in advance.
>
>
>
> --
> With regards
> Anupriya Nyayban
> Ph.D. Scholar
> Department of Physics
> NIT Silchar
>
> _______________________________________________
> Wien mailing list
> Wien at zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at: http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>
--
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300 FAX: +43-1-58801-165982
Email: blaha at theochem.tuwien.ac.at WIEN2k: http://www.wien2k.at
WWW: http://www.imc.tuwien.ac.at
-------------------------------------------------------------------------