<div dir="ltr"><p class="MsoNormal">I have checked with the MPIRUN option. I used </p><p class="MsoNormal"><br></p><p class="MsoNormal">setenv WIEN_MPIRUN "/usr/local/mvapich2-icc/bin/mpirun -hostfile $PBS_NODEFILE _EXEC_"<br></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px"><b><br></b></span></font></p><p class="MsoNormal">before. Now I changed the hostfile to _HOSTS_ instead of $PBS_NODEFILE. I can get 4 lapw1_mpi running. However, the CPU usage of each of the job is still only 50% (I use "top" to check this). Why is this the case? What could I do in order to get CPU usage of 100%? (OMP_NUM_THREAD=1 and in the .machine1 and .machine2 file I have two lines of node1)</p><p class="MsoNormal"><br></p><p class="MsoNormal">In the pure MPI case, using the .machines file as</p><p class="MsoNormal">#</p><p class="MsoNormal">1:node1 node1 node1 node1</p><p class="">granularity:1<br></p><p class=""><span lang="EN-US">extrafine:1</span></p><p class="">#</p><p class="MsoNormal">I can get 4 lapw1_mpi running with 100% CPU usage. How shall I understand this situation?</p><p class="MsoNormal"><br></p><p class="MsoNormal">The following are some details on the options and the system I used:</p><p class="MsoNormal">1. Wien2k_14.2, mpif90 (compiled with ifort) for MVAPICH2 version 2.0</p><p class="MsoNormal"><br></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">2. The batch system is PBS and the script I used for qsub: </span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">#</span></font></p><p class="MsoNormal">#!/bin/tcsh</p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">#PBS -l nodes=1:ppn=4</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">#PBS -l walltime=00:30:00</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">#PBS -q node1</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">#PBS -o wien2k_output</span></font></p><p class="MsoNormal"></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">#PBS -j oe</span></font></p><div><br></div><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">cd $PBS_O_WORKDIR</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">limit vmemoryuse unlimited</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px"> </span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">#set how many cores to be used for each mpi job</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">set mpijob=2</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px"> </span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">set proclist=`cat $PBS_NODEFILE `</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">echo $proclist</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span 
style="font-size:13.3333330154419px">set nproc=$#proclist</span></font></p><p class="MsoNormal"></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">echo number of processors: $nproc</span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px"><br></span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">#---------- writing .machines file ------------------</span></font></p><p class="MsoNormal"></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">echo '#' > .machines</span></font></p><div><div> set i=1</div><div> while ($i <= $nproc )</div><div> echo -n '1:' >>.machines</div><div> @ i1 = $i + $mpijob</div><div> @ i2 = $i1 - 1</div><div> echo $proclist[$i-$i2] >>.machines</div><div> set i=$i1</div><div> end</div><div>echo 'granularity:1' >>.machines</div><div>echo 'extrafine:1' >>.machines</div></div><div># --------- end of .machines file<br></div><div><br></div><div>run_lapw -p -i 40 -cc 0.0001 -ec 0.00001<br></div><div>###</div><div><br></div><div>3. The .machines file:</div><div><div>#</div><div>1:node1 node1</div><div>1:node1 node1</div><div>granularity:1</div><div>extrafine:1</div></div><div><br></div><div>and .machine1 and .machine2 files are both</div><div>node1</div><div>node1</div><div><br></div><div><br></div><div>3. The parallel_options:</div><div><div>setenv TASKSET "no"</div><div>setenv USE_REMOTE 1</div><div>setenv MPI_REMOTE 0</div><div>setenv WIEN_GRANULARITY 1</div><div>setenv WIEN_MPIRUN "/usr/local/mvapich2-icc/bin/mpirun -np _NP_ -hostfile _HOSTS_ _EXEC_"</div></div><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px"><br></span></font></p><p class="MsoNormal"><font face="Tahoma, sans-serif"><span style="font-size:13.3333330154419px">4. 
5. The compiling options:
current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
current:FPOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -Dmkl_scalapack -traceback
current:FFTW_OPT:-DFFTW3 -I/usr/local/include
current:FFTW_LIBS:-lfftw3_mpi -lfftw3 -L/usr/local/lib
current:LDFLAGS:$(FOPT) -L/opt/intel/Compiler/11.1/046/mkl/lib/em64t -pthread
current:DPARALLEL:'-DParallel'
current:R_LIBS:-lmkl_lapack -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread -lguide
current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64 -lmkl_blacs_intelmpi_lp64 $(R_LIBS)
current:MPIRUN:/usr/local/mvapich2-icc/bin/mpirun -np _NP_ -hostfile _HOSTS_ _EXEC_
current:MKL_TARGET_ARCH:intel64

Thanks,
Fermin

-------------------------------------------------------------
-----Original Message-----
<p class=""><span lang="EN-US">From: <a href="mailto:wien-bounces@zeus.theochem.tuwien.ac.at">wien-bounces@zeus.theochem.tuwien.ac.at</a>
[<a href="mailto:wien-bounces@zeus.theochem.tuwien.ac.at">mailto:wien-bounces@zeus.theochem.tuwien.ac.at</a>]
On Behalf Of Peter Blaha</span></p>
<p class=""><span lang="EN-US">Sent: Tuesday, January 27, 2015 11:55 PM</span></p>
<p class=""><span lang="EN-US">To: A Mailing list for WIEN2k users</span></p>
<p class=""><span lang="EN-US">Subject: Re: [Wien] Job
distribution problem in MPI+k point parallelization</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">It should actually be only 4 lapw1_mpi
jobs running with this setup.</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">How did you find this: using
"top" or ps ???</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">Do you have thread-parallelization on ?
(OMP_NUM_THREAD=2 ???) Then it doubles the processes (but you gain nothing ...)</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">It could also be that your mpirun
definition is not ok with respect of you version of mpi, ...</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">PS: I hope it is clear, that such a
setup is useful only for testing.</span></p>
<p class=""><span lang="EN-US">the mpi-program on 2 cores is "slower/at
least not faster" than the sequential program on 1 core.</span></p>
<p class=""><span lang="EN-US"> </span></p>
<p class=""><span lang="EN-US">On 01/27/2015 04:41 PM, lung Fermin
wrote:</span></p>
<p class=""><span lang="EN-US">> Dear Wien2k community,</span></p>
<p class=""><span lang="EN-US">> </span></p>
<p class=""><span lang="EN-US">> Recently, I am trying to set up a
calculation of a system with ~40 </span></p>
<p class=""><span lang="EN-US">> atoms using MPI+k point
parallelization. Suppose in one single node, I </span></p>
<p class=""><span lang="EN-US">> want to calculate 2 k points, with
each k point using 2 processors to </span></p>
<p class=""><span lang="EN-US">> run mpi parallel. The .machines
file I used was #</span></p>
<p class=""><span lang="EN-US">> 1:node1 node1</span></p>
<p class=""><span lang="EN-US">> 1:node1 node1</span></p>
<p class=""><span lang="EN-US">> granularity:1</span></p>
<p class=""><span lang="EN-US">> extrafine:1</span></p>
<p class=""><span lang="EN-US">> #</span></p>
<p class=""><span lang="EN-US">> </span></p>
<p class=""><span lang="EN-US">> When I ssh into node1, I saw that
there were 8 lapw1_mpi running, each </span></p>
<p class=""><span lang="EN-US">> with CPU usage of 50%. Is this
natural or have I done something wrong?</span></p>
<p class=""><span lang="EN-US">> What I expect was having 4
lapw1_mpi running each with CPU usage of </span></p>
<p class=""><span lang="EN-US">> 100% instead. I am a newbei to mpi
parallelization. Please point me </span></p>
<p class=""><span lang="EN-US">> out if I have misunderstand
anything.</span></p>
<p class=""><span lang="EN-US">> </span></p>
<p class=""><span lang="EN-US">> Thanks in advance,</span></p>
<p class=""><span lang="EN-US">> Fermin</span></p>
<p class=""><span lang="EN-US">> </span></p>
<p class=""><span lang="EN-US">> </span></p>
<p class=""><span lang="EN-US">> </span></p></div>