[Wien] Strategy for a large slab with SP and SOC
Peter Blaha
peter.blaha at tuwien.ac.at
Sat Jun 17 08:01:52 CEST 2023
Your 4 points are not really recommended in the first place.
If it is an scf convergence problem (which I doubt): grep :DIS case.scf
Does it look like divergence?
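For example (:DIS and :ENE are standard labels in the scf file):
   grep :DIS case.scf    # charge distance; should decrease
   grep :ENE case.scf    # total energy; should settle to a constant
Steadily growing :DIS values would indicate divergence.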
You need to find which eigenvalue causes the ghostband, from which atom
and angular momentum.
See *scf2* and *output2* files.
Once you know this, look into case.scf1 to see how the LOs and energy
parameters for this state are set; you will probably have to modify
case.in1(c).
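For locating it, a sketch (the QTL-B warnings written by lapw2 are the
usual fingerprint of a ghostband; the exact wording may vary between
versions):
   grep QTL-B case.scf2 case.output2*   # band energy, atom index and l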
PS: Use init_lapw -prec 1n at the beginning, maybe with -ecut 0.999.
PPS: I would NOT include HDLOs if I had ghostbands. Mixing with PRATT
helps only in very few cases and is not really recommended for "normal"
calculations.
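For reference, if one tests PRATT anyway: the mixer and the factor are
set in the first two lines of case.inm, roughly in this format (taken
from the user guide; check your version):
   PRATT  0.0   YES   (BROYD/PRATT, background charge, norm)
   0.05               mixing FACTOR for BROYD/PRATT scheme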
PPPS: I hope you use runsp_c_lapw for something like WTe2?
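runsp_c_lapw constrains the moment to zero: it runs only the spin-up
part and duplicates it for spin-down, which is what a non-magnetic
material needs when SP is only a prerequisite for SOC. A typical call
(convergence flags are illustrative):
   runsp_c_lapw -ec 0.0001 -cc 0.001 -p   # -p: k-point parallel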
On 17.06.2023 at 00:28, pluto via Wien wrote:
> Dear Prof. Blaha, dear All,
>
> Thank you for the comment on slab strategy, this helps a lot.
>
> I have a more specific question: for a large WTe2 slab (60 atoms), which
> is a low-symmetry material that is also polar in the out-of-plane
> direction, I am getting ghostbands in lapw2 after a few iterations. What
> is a good strategy to fix this?
>
> I was thinking of:
>
> 1. init_lapw -hdlo
> 2. Low mixing (like 0.05) with PRATT
> 3. Decrease RMT (from first tests, with RMT 2.5 ghostbands seem to
> appear after around 3 iterations, with 2.2 after many iterations)
> 4. Increase RKmax
>
> 3 and 4 are probably computationally expensive...
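>
> For 3 and 4 concretely, I would use setrmt_lapw for the radii and edit
> RKmax by hand (the values below are just examples):
>
>    setrmt_lapw case -r 10    # proposes RMTs reduced by ~10 percent
>    # RKmax is the first number on the 2nd line of case.in1(c)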
>
> I did several tests without SOC, typically using something like:
>
> init_lapw -sp -b -numk 100 -hdlo -fermit 0.002
>
> Maybe other settings are critical?
>
> The bulk calculation converges very easily (first without and then with
> SOC) with default settings like
>
> init_lapw -sp -b -numk 2000
>
> The bulk bands look like those in the literature, and are practically
> the same with RMT 2.2 and RMT 2.5.
>
> Best,
> Lukasz
>
>
>
>
> On 2023-06-16 16:45, Peter Blaha wrote:
>> No, this is not a good strategy.
>>
>> From a converged non-spin-polarized calculation you cannot (easily)
>> get to a spin-polarized solution.
>>
>> So 1) is only good if you want to quote how much more stable a SP
>> solution is compared to a non-SP one.
>>
>> 2 + 3 is good practice. You gain insight into how large the changes
>> due to SO coupling are, and on which atoms.
>>
>> ------------------------
>>
>> In terms of efficiency for large cases, I would in particular
>> preconverge with a coarse k-mesh and refine later on.
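>>
>> For example (k-point counts and convergence criteria are just
>> placeholders):
>>
>>    x kgen                            # coarse mesh first, e.g. ~20 points
>>    runsp_lapw -ec 0.001 -cc 0.01
>>    x kgen                            # final mesh, e.g. 100+ points
>>    runsp_lapw -ec 0.0001 -cc 0.001   # restarts from preconverged density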
>>
>> ---------------------------
>>
>> Every runsp cycle starts with a case.clmsum/up/dn file.
>>
>> These files can come from an initialization, but of course also from
>> any prior scf calculation (e.g. with a lower k-mesh or without SO). Of
>> course, a restore_lapw ... gives you all the files necessary to run
>> another scf cycle.
>>
>> -NI would keep old Broyden files, but after a "save_lapw" they are
>> gone anyway. -NI is useful if you want to continue an scf, e.g. because
>> the first runsp stopped after 40 cycles and did not reach convergence
>> yet.
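>>
>> For example (criteria illustrative):
>>
>>    runsp_lapw -ec 0.0001 -i 40   # hits the 40-cycle limit, unconverged
>>    runsp_lapw -NI -ec 0.0001     # continue: keeps case.broyd* and density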
>>
>> On 16.06.2023 at 10:44, pluto via Wien wrote:
>>> Dear All,
>>>
>>> I would just like to confirm the step-by-step convergence strategy
>>> for a large slab with SP and SOC (this refers in general to a
>>> spin-momentum-locked non-magnetic TMDC, but could be any other
>>> material).
>>>
>>> Is the following correct:
>>>
>>> 1. Converge without SP and without SOC, and save_lapw e.g. as
>>> CONV_NO_SP_NO_SOC so it can be used in another directory or on
>>> another computer for the next steps
>>> 2. Use this as a starting point to converge with SP, and save_lapw
>>> as CONV_W_SP_NO_SOC (one can also restore_lapw in another directory
>>> and start there)
>>> 3. Use this as a starting point to converge with SP and with SOC
>>> (and save_lapw to have it for the future)
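>>>
>>> In commands, for steps 2 -> 3 I mean something like this (flags and
>>> the second save name are just placeholders):
>>>
>>>    restore_lapw CONV_W_SP_NO_SOC   # brings back case.clmsum/up/dn etc.
>>>    initso_lapw                     # set up case.inso for SOC
>>>    runsp_lapw -so -ec 0.0001
>>>    save_lapw CONV_W_SP_W_SOC       # name arbitrary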
>>>
>>> I often start with step 3 right away, but I think for a really large
>>> system this might be quite inefficient.
>>>
>>> How does the program know to use the starting density from the
>>> previous step?
>>> Does restore_lapw create the necessary files when I transfer to a
>>> new directory?
>>> Is -NI or some other setting in run_lapw important here?
>>>
>>> At the moment I am using an older cluster with many cores and use
>>> k-parallel. I still haven't managed to set up MPI, but maybe it is not
>>> needed for what I want, because my klist file typically has 50-80
>>> k-points, depending on the symmetry of the system. I use the QTL
>>> program quite a lot, so having it parallelized would sometimes speed
>>> things up a bit for me.
>>>
>>> Best,
>>> Lukasz
--
-----------------------------------------------------------------------
Peter Blaha, Inst. f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-158801165300
Email: peter.blaha at tuwien.ac.at
WWW: http://www.imc.tuwien.ac.at WIEN2k: http://www.wien2k.at
-------------------------------------------------------------------------