[Wien] allocating number of cores to a job

Peter Blaha pblaha at theochem.tuwien.ac.at
Tue Jan 28 11:08:58 CET 2020


Are you sure you have 12 k-points in case.klist? Please check.

Of course, with 12 k-points you can only use 12 k-parallel jobs (and not 
16). If you have 12 k-points, it should run 12-fold k-parallel.

How did you find out that only 4 cores were used?
Check with "top" and in your case.dayfile.
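
For example, something along these lines (just a sketch; "case" stands for 
your actual case name, and this assumes a k-parallel run on localhost):

   # watch the CPU usage of the running lapw1/lapw2 processes
   top
   # count how many k-parallel jobs (lines like "localhost(...)") appear in the dayfile
   grep -c 'localhost(' case.dayfile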

PS: The simplest way to further utilize your big machine is to 
uncomment (remove the leading "#" from) the line:

#omp_global:4

Then each k-parallel job will use (up to) 4 cores and you should see a 
further speedup by a factor of about 3.
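
For your 12 k-points on the 64-core machine, the edited .machines file 
could look like the sketch below (12 k-parallel jobs times 4 OpenMP threads, 
i.e. 48 cores; the numbers are a suggestion, not the definitive setup):

   1:localhost
   1:localhost
   1:localhost
   1:localhost
   1:localhost
   1:localhost
   1:localhost
   1:localhost
   1:localhost
   1:localhost
   1:localhost
   1:localhost
   granularity:1
   extrafine:1
   omp_global:4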

For bigger cases (we don't know how many atoms/cell you have), 
mpi-parallelization could also be very useful.
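
In that case the .machines file would request several mpi processes per 
k-parallel job instead of (or in addition to) OpenMP threads. A sketch only, 
assuming WIEN2k was compiled with the mpi option, with 4 k-parallel jobs of 
16 mpi processes each (64 cores in total):

   lapw0:localhost:16
   1:localhost:16
   1:localhost:16
   1:localhost:16
   1:localhost:16
   granularity:1

Whether this pays off depends on the matrix size, i.e. on the number of 
atoms per cell.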

You need to read the users guide (help_lapw), in particular the section on 
parallelization.

On 1/28/20 10:51 AM, Ali Baghizhadeh wrote:
> Dear Wien Users
> 
> I am running spin-polarized calculations on a hexagonal system, on a 
> machine with an AMD Threadripper (64 processors, one thread per core), a 
> Linux system and the gfortran compiler. WIEN2k was installed with a 
> parallel option (I don't know which one, as three options are introduced 
> in the manual). When I ran calculations with a 5x5x3 k-mesh (12 k-points) 
> and RKmax: -6.5, only 4 cores were used. As suggested in the manual and on 
> the mailing list, I added 16 lines of “1:localhost” (the content of the 
> .machines file is shown below), assuming that the calculation would then 
> run on 16 cores. But again only 4 cores were used.
> 
> I would appreciate some comments on how to dedicate a certain number of 
> cores to a specific job.
> 
> Thank you in advance.
> 
> Ali Baghi zadeh
> Postdoctoral fellow
> CICECO Institute of Materials, University of Aveiro
> Portugal
> 
> The .machines file in the folder where I saved my structure file and am 
> performing the calculations:
> 
> # .machines is the control file for parallel execution. Add lines like
> #
> #   speed:machine_name
> #
> # for each machine specifying their relative speed. For mpi parallelization use
> #
> #   speed:machine_name:1 machine_name:1
> #   lapw0:machine_name:1 machine_name:1
> #
> # further options are:
> #
> #   granularity:number (for loadbalancing on irregularly used machines)
> #   residue:machine_name  (on shared memory machines)
> #   extrafine         (to distribute the remaining k-points one after the other)
> #
> # granularity sets the number of files that will approximately
> # be generated by each processor; this is used for load-balancing.
> # On very homogeneous systems set number to 1.
> # If, after distributing the k-points to the various machines, residual
> # k-points are left, they will be distributed to the residual-machine_name.
> #
> 
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> 1:localhost
> granularity:1
> extrafine:1
> 
> #
> # Uncomment for specific OMP-parallelization (overwriting a global OMP_NUM_THREADS)
> #
> #omp_global:4
> # or use program-specific parallelization:
> #omp_lapw0:4
> #omp_lapw1:4
> #omp_lapw2:4
> #omp_lapwso:4
> #omp_dstart:4
> #omp_sumpara:4
> #omp_nlvdw:4
> 
> Also, in the file "parallel_options" I see the following information:
> 
> setenv TASKSET "no"
> if ( ! $?USE_REMOTE ) setenv USE_REMOTE 0
> if ( ! $?MPI_REMOTE ) setenv MPI_REMOTE 0
> setenv WIEN_GRANULARITY 1
> setenv DELAY 0.1
> setenv SLEEPY 1
> setenv WIEN_MPIRUN "mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"
> setenv CORES_PER_NODE 1
> 
> 

-- 

                                       P.Blaha
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300             FAX: +43-1-58801-165982
Email: blaha at theochem.tuwien.ac.at    WIEN2k: http://www.wien2k.at
WWW:   http://www.imc.tuwien.ac.at/TC_Blaha
--------------------------------------------------------------------------

