<div dir="ltr"><p class=""><span lang="EN-US">Thanks for all the help and comments.</span></p><p class=""><span lang="EN-US">I tried Oleg's suggestion and it works. I will go onto compare the performance of different parallelization settings on my system. </span></p><p class=""><span lang="EN-US">Fermin</span></p><p class=""><span lang="EN-US">-----------------------------------------------------------------------</span></p><p class=""><span lang="EN-US">-----------------------------------------------------------------------</span></p><p class=""><span lang="EN-US">-----Original Message-----<br>
From: wien-bounces@zeus.theochem.tuwien.ac.at [mailto:wien-bounces@zeus.theochem.tuwien.ac.at] On Behalf Of Peter Blaha
Sent: Wednesday, January 28, 2015 2:45 PM
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] Job distribution problem in MPI+k point parallelization
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">Now it is rather clear why you had 8 mpi
jobs running previously.</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">The new definition of WIEN_MPIRUN and
also your pbs script seems ok and the jobs are now distributed as expected.</span></p>
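
For reference, WIEN_MPIRUN is defined in $WIENROOT/parallel_options; a typical definition looks like the line below, where _NP_, _HOSTS_ and _EXEC_ are placeholders that the parallel scripts substitute at run time:

setenv WIEN_MPIRUN "mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"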
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">I do not know why you get only 50% in
this test. Maybe because the test is not suitable and requires so much
communication that the cpu cannot run at full speed.</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">As I said before, a setup with two mpi
jobs and 2 k-parallel jobs on a</span></p>
<p class=""><span lang="EN-US">4 core machine is a "useless"
setup. Parallelization is not a task which works in an "arbitrary"
way, but needs to be adapted to the hardware AND the physical problem.</span></p>
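
For reference, that setup presumably corresponds to a .machines file along these lines ("host" is a placeholder for the actual node name):

1:host:2
1:host:2

i.e. two k-parallel jobs, each running as an mpi job on two cores.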
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">Your task is now to compare
"timings" and find out the optimal setup for the specific problem and
the available hardware.</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">Run the same job with .machines file:</span></p>
<p class=""><span lang="EN-US">1:host:4</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">or</span></p>
<p class=""><span lang="EN-US">1:host</span></p>
<p class=""><span lang="EN-US">1:host</span></p>
<p class=""><span lang="EN-US">1:host</span></p>
<p class=""><span lang="EN-US">1:host</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">or setenv OMP_NUM_THREAD =2</span></p>
<p class=""><span lang="EN-US">1:host</span></p>
<p class=""><span lang="EN-US">1:host</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">and check which run is the fastest.</span></p>
<p class=""><span lang="EN-US"> -----------------------------------------------------------</span></p><p class=""><span lang="EN-US">-----------------------------------------------------------</span></p><p class=""><span lang="EN-US">-----Original Message-----<br>
From: <a href="mailto:wien-bounces@zeus.theochem.tuwien.ac.at">wien-bounces@zeus.theochem.tuwien.ac.at</a>
[mailto:<a href="mailto:wien-bounces@zeus.theochem.tuwien.ac.at">wien-bounces@zeus.theochem.tuwien.ac.at</a>] On Behalf Of Oleg Rubel<br>
Sent: Wednesday, January 28, 2015 12:42 PM<br>
To: A Mailing list for WIEN2k users<br>
Subject: Re: [Wien] Job distribution problem in MPI+k point parallelization

It might be unrelated, but worth a try. I had a similar problem once with MVAPICH2. It was solved by setting this environment variable in the submission script:

setenv MV2_ENABLE_AFFINITY 0

You can also check which core each process is bound to using the "taskset" command. The same command also lets you change the affinity on the fly.
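
For example (the PID 12345 is hypothetical; substitute the PID of an actual running process):

taskset -p 12345        # show the current affinity mask of the process
taskset -cp 0,1 12345   # re-bind the process to cores 0 and 1 on the fly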

I hope this will help.

Oleg