[Wien] Job distribution problem in MPI+k point parallelization
lung Fermin
ferminlung at gmail.com
Thu Jan 29 03:19:24 CET 2015
Thanks for all the help and comments.
I tried Oleg's suggestion and it works. I will go on to compare the
performance of different parallelization settings on my system.
Fermin
-----------------------------------------------------------------------
-----Original Message-----
From: wien-bounces at zeus.theochem.tuwien.ac.at [mailto:
wien-bounces at zeus.theochem.tuwien.ac.at] On Behalf Of Peter Blaha
Sent: Wednesday, January 28, 2015 2:45 PM
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] Job distribution problem in MPI+k point parallelization
Now it is rather clear why you had 8 mpi jobs running previously.
The new definition of WIEN_MPIRUN and your pbs script seem ok, and the
jobs are now distributed as expected.
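(For reference, the WIEN_MPIRUN variable is set in $WIENROOT/parallel_options
and, for a generic mpirun, typically looks roughly like the line below; the
exact placeholders and options depend on the WIEN2k version and the MPI
implementation, so treat this only as a sketch:
  setenv WIEN_MPIRUN "mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"
)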
I do not know why you get only 50% in this test. Maybe the test case is
not suitable and requires so much communication that the CPUs cannot run
at full speed.
As I said before, a setup with 2 mpi jobs and 2 k-parallel jobs on a
4-core machine is a "useless" setup. Parallelization does not work in an
"arbitrary" way; it needs to be adapted to the hardware AND the physical
problem.
Your task is now to compare "timings" and find out the optimal setup for
the specific problem and the available hardware.
Run the same job with .machines file:
1:host:4
or
1:host
1:host
1:host
1:host
or (with setenv OMP_NUM_THREADS 2)
1:host
1:host
and check which run is the fastest.
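A minimal csh sketch for such a comparison (the file names .machines.mpi4,
.machines.k4 and .machines.k2omp2 are just placeholders for the three
variants above, and timing a single parallel lapw1 run is assumed to be
representative of the full SCF cycle):

  # csh sketch: time one parallel lapw1 run for each candidate .machines file
  foreach m (.machines.mpi4 .machines.k4 .machines.k2omp2)
      cp $m .machines
      # only the third variant uses 2 OpenMP threads per k-point job
      if ("$m" == ".machines.k2omp2") then
          setenv OMP_NUM_THREADS 2
      else
          setenv OMP_NUM_THREADS 1
      endif
      echo "== $m =="
      time x lapw1 -p
  end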
-----------------------------------------------------------
-----Original Message-----
From: wien-bounces at zeus.theochem.tuwien.ac.at [mailto:
wien-bounces at zeus.theochem.tuwien.ac.at] On Behalf Of Oleg Rubel
Sent: Wednesday, January 28, 2015 12:42 PM
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] Job distribution problem in MPI+k point parallelization
It might be unrelated, but it is worth a try. I once had a similar problem
with MVAPICH2. It was solved by setting this environment variable in the
submission script:
setenv MV2_ENABLE_AFFINITY 0
You can also check which core each process is bound to using the "taskset"
command. The same command also allows you to change the affinity on the fly.
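For example (12345 is just a placeholder for the PID of an actual
lapw1_mpi process):

  taskset -cp 12345       # show the list of CPUs the process is bound to
  taskset -cp 0-3 12345   # re-bind the process to cores 0-3 on the fly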
I hope this will help
Oleg