[Wien] OS bug/inconvenience with mkl thread/mpi conflicts

Peter Blaha pblaha at theochem.tuwien.ac.at
Tue Mar 15 16:35:51 CET 2011


Setting OMP_NUM_THREADS lets you select whether or not to use multithreading.
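
For example, in the job script (a minimal sketch only; MKL honours
OMP_NUM_THREADS, and also the MKL-specific MKL_NUM_THREADS):

   export OMP_NUM_THREADS=1     # serial lapack calls in lapwdm/mixer
   # or, to allow MKL two threads per process:
   # export OMP_NUM_THREADS=2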

A slight modification of our example script on the WIEN2k FAQ page
can even account for this and automatically spawn, e.g., only
4 mpi processes on 8 cores (when OMP_NUM_THREADS=2).
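
As an illustration only (this is not the actual FAQ script), the idea is
to keep one entry per OMP_NUM_THREADS cores from the scheduler's host
list, so that (mpi processes) x (threads) matches the allocation; the
PBS-style $PBS_NODEFILE and the temporary file name below are assumptions:

   export OMP_NUM_THREADS=${OMP_NUM_THREADS:-2}
   NCORES=$(wc -l < "$PBS_NODEFILE")      # one line per allocated core
   NPROC=$(( NCORES / OMP_NUM_THREADS ))  # e.g. 8 cores / 2 threads = 4 mpi processes
   # keep every OMP_NUM_THREADS-th host entry for building the .machines file
   awk -v t="$OMP_NUM_THREADS" '(NR - 1) % t == 0' "$PBS_NODEFILE" > mpi_hosts.$$
   echo "$NPROC mpi processes with $OMP_NUM_THREADS threads each"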


On 15.03.2011 15:31, Laurence Marks wrote:
> This is an informational report of an issue that can arise with Wien2k
> leading to poor performance. I came across it when experimenting with
> the "procs=XX" option in msub/qsub with XX=64 as an alternative to the
> more standard "nodes=X:ppn=Y".
>
> In an scf cycle lapwdm and (with the next release) mixer always use
> lapack routines (and maybe some others) which by default with mkl are
> multithreaded and use multiple cores. For an mpi run with "lapw0:"
> specified, lapw0, lapw1 and lapw2 all use mpi. An issue can arise with
> msub/qsub (and perhaps others) if the number of cores mkl uses is
> larger than what has been allocated and there are other mpi tasks
> running on the same node. Then the performance of lapwdm can be bad;
> in one case it took 14 minutes rather than 30 seconds.
>
> I am not sure if this is something "fixable" in Wien2k, and there are
> ways in qsub/msub to avoid it using nodes=X:ppn=Y. However, procs=XX
> is "nice" as Wien2k in mpi does not really need factors of 2
> cores/node (I've tested this) and can fit in the gaps.
>
> N.B., running without mpi and using nodes=X:ppn=1 could also run into
> the same problem.
>

-- 

                                       P.Blaha
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-15671             FAX: +43-1-58801-15698
Email: blaha at theochem.tuwien.ac.at    WWW: http://info.tuwien.ac.at/theochem/
--------------------------------------------------------------------------

