<div dir="auto">One important correction: none of what you described uses OpenMP (OMP); it is all mpi. In the most recent Wien2k, OpenMP is controlled by omp_X commands in the .machines file.<br><br><div data-smartmail="gmail_signature" dir="auto">_____<br>Professor Laurence Marks<br>"Research is to see what everybody else has seen, and to think what nobody else has thought", Albert Szent-Gyorgi<br><a href="http://www.numis.northwestern.edu">www.numis.northwestern.edu</a></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Oct 15, 2020, 13:45 Christian Søndergaard Pedersen <<a href="mailto:chrsop@dtu.dk">chrsop@dtu.dk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div id="m_39084929715362615divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Helvetica,sans-serif" dir="ltr">
<p>Dear Professors Blaha and Marks,</p>
<p><br>
</p>
<p>Thank you kindly for your explanations; the code is now running smoothly - and efficiently! For the sake of future WIEN2k newbies, I will summarize the mistakes I made; hopefully this can save someone else's time:</p>
<p><br>
</p>
<p>1: calling 'mpirun run_lapw -p'</p>
<p>As Professor Marks explained, calling mpirun explicitly overloaded the compute node I was using: mpirun spawned one process per CPU on the node, each of which spawned additional processes as specified in the .machines file. The correct way is to call
'run_lapw -p' directly.</p>
<p><br>
</p>
<p>2: Issuing only one job in the .machines file, regardless of how many nodes/cores the job was using. For instance, for a Xeon16 node, I would write:</p>
<p><br>
</p>
<p>1:node1:4<br>
</p>
<p><br>
</p>
<p>which uses 4 cores for lapw1/lapw2 while leaving the remaining 12 cores idle. I corrected this to:</p>
<p><br>
</p>
<p>1:node1:4</p>
<p>1:node1:4</p>
<p>1:node1:4</p>
<p>1:node1:4<br>
</p>
<p><br>
</p>
<p>... which works, and which is explained in the example for OMP on page 86 of the manual.</p>
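<p>(Sketch for later readers: the four lines above would typically sit in a fuller .machines file along the lines below. The lapw0, granularity, and extrafine lines and the omp_global example follow the manual's conventions but are my additions, not part of the original message - check them against your WIEN2k version.)</p>

```
# .machines (sketch): 4 k-parallel jobs x 4 MPI cores on one 16-core node
1:node1:4
1:node1:4
1:node1:4
1:node1:4
lapw0:node1:16
granularity:1
extrafine:1
# recent versions control OpenMP via omp_X lines, e.g.
# omp_global:2
```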
<p><br>
</p>
<p>Best regards</p>
<p>Christian<br>
</p>
</div>
<hr style="display:inline-block;width:98%">
<div id="m_39084929715362615divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Wien <<a href="mailto:wien-bounces@zeus.theochem.tuwien.ac.at" target="_blank" rel="noreferrer">wien-bounces@zeus.theochem.tuwien.ac.at</a>> on behalf of Laurence Marks <<a href="mailto:laurence.marks@gmail.com" target="_blank" rel="noreferrer">laurence.marks@gmail.com</a>><br>
<b>Sent:</b> 15 October 2020 17:15:30<br>
<b>To:</b> A Mailing list for WIEN2k users<br>
<b>Subject:</b> Re: [Wien] .machines for several nodes</font>
<div> </div>
</div>
<div>
<div dir="ltr">
<div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">Let me expand on why you should not call mpirun yourself unless you are doing something "special".</div>
<div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000"><br>
</div>
<div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">Wien2k uses the .machines file to set up how to use mpi and (in the most recent versions) omp. As discussed by Peter, in most cases mpi works best with a close-to-square decomposition of the matrices,
often powers of 2. OMP is good for having 2-4 cores collaborate, not more. Depending upon your architecture, OMP may be better or worse than mpi. (On my nodes mpi is always best; I know that on some of Peter's, OMP is better.)</div>
<div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000"><br>
</div>
<div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">The code internally sets the number of threads to use (for omp), and calls mpirun or its equivalent depending upon what you have in parallel_options. While many codes/programs
are structured so that they run under mpi via "mpirun MyCode", Wien2k is not. The danger of adding mpirun yourself is that you end up with multiple copies of run_lapw running, which is not what you want.</div>
<div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000"><br>
</div>
<div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">There might be special cases where you would want to use "mpirun run_lapw" to remotely start a single instance, but until you know how to use Wien2k, do not take this complicated route; it
is likely to create problems.</div>
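<div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">(Editor's note: the mpirun call WIEN2k makes internally is built from $WIENROOT/parallel_options, which siteconfig writes. A typical csh-style content looks roughly like the sketch below; the exact variables and placeholders are version- and MPI-dependent, so check your own installation rather than copying this.)</div>

```
# $WIENROOT/parallel_options (sketch; written by siteconfig, csh syntax)
setenv USE_REMOTE 1
setenv MPI_REMOTE 0
setenv WIEN_GRANULARITY 1
setenv WIEN_MPIRUN "mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"
```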
<div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000"><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Thu, Oct 15, 2020 at 5:01 AM Laurence Marks <<a href="mailto:laurence.marks@gmail.com" target="_blank" rel="noreferrer">laurence.marks@gmail.com</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="auto">
<div>As an addendum to what Peter said, "mpirun run_lapw" is totally wrong. Remove the mpirun.<br>
<br>
<div>_____<br>
Professor Laurence Marks<br>
"Research is to see what everybody else has seen, and to think what nobody else has thought", Albert Szent-Gyorgi<br>
<a href="http://www.numis.northwestern.edu" target="_blank" rel="noreferrer">www.numis.northwestern.edu</a></div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Thu, Oct 15, 2020, 03:35 Peter Blaha <<a href="mailto:pblaha@theochem.tuwien.ac.at" target="_blank" rel="noreferrer">pblaha@theochem.tuwien.ac.at</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Well, 99% cpu efficiency does not mean that you run efficiently; my <br>
estimate is that you run at least 2 times slower than what is possible.<br>
<br>
Anyway, please save the dayfile and compare the wall time of the <br>
different parts with a different setup.<br>
<br>
At least now we know that you have 24 cores/node. So the lapw0/dstart <br>
lines are perfectly ok.<br>
<br>
However, you run lapw1 on 3 mpi cores. This is "maximally inefficient": <br>
it gives a 3x1 division of your matrix, but the decomposition should be <br>
as close to square as possible, so 4x4=16 or 8x8=64 cores is <br>
optimal. With your 24 cores and 96 atoms/cell I'd probably go for 12 <br>
mpi cores and 2 k-parallel jobs per node:<br>
<br>
1:x073:12<br>
1:x082:12<br>
1:x073:12<br>
1:x082:12<br>
<br>
Maybe one can even overload the nodes a bit by using 16 instead of 12 <br>
cores, but this could be dangerous on some machines because your <br>
admins might have forced cpu-binding, .... (You can even change the <br>
.machines file (12-->16) "by hand" while your job is running, and maybe <br>
change it back once you have seen whether the timing is better or worse.)<br>
<br>
In any case, compare the timings in the dayfile in order to find the <br>
optimal setup.<br>
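(Editor's sketch, not from the original message: one way to generate a .machines body like the four lines above from within a job script. The helper name and the fixed counts of 12 cores and 2 jobs per node are illustrative; on a SLURM cluster the node names would come from `scontrol show hostnames "$SLURM_JOB_NODELIST"`.)<br>
<br>

```shell
# Sketch: print "1:node:12" lines giving each node two 12-core k-parallel
# jobs, matching the example above. Purely illustrative; not part of WIEN2k.
make_machines() {
    for node in "$@"; do
        for _ in 1 2; do                 # two k-parallel jobs per node
            printf '1:%s:12\n' "$node"
        done
    done
}

# Example with the two nodes from the message:
make_machines x073 x082
# In a SLURM job one might instead run:
#   make_machines $(scontrol show hostnames "$SLURM_JOB_NODELIST") > .machines
```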
_______________________________________________<br>
Wien mailing list<br>
<a href="mailto:Wien@zeus.theochem.tuwien.ac.at" rel="noreferrer noreferrer" target="_blank">Wien@zeus.theochem.tuwien.ac.at</a><br>
<a href="https://urldefense.com/v3/__http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien__;!!Dq0X2DkFhyF93HkjWTBQKhk!A8sB9-qFfbOGiCLPnA6iSE84ZZQy6mW4l0zuzz3NpWm1Wmn2GKqNPUMWg1UBjmQOGPID6g$" rel="noreferrer noreferrer noreferrer" target="_blank">https://urldefense.com/v3/__http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien__;!!Dq0X2DkFhyF93HkjWTBQKhk!A8sB9-qFfbOGiCLPnA6iSE84ZZQy6mW4l0zuzz3NpWm1Wmn2GKqNPUMWg1UBjmQOGPID6g$</a>
<br>
SEARCH the MAILING-LIST at: <a href="https://urldefense.com/v3/__http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html__;!!Dq0X2DkFhyF93HkjWTBQKhk!A8sB9-qFfbOGiCLPnA6iSE84ZZQy6mW4l0zuzz3NpWm1Wmn2GKqNPUMWg1UBjmSyxhK3Ng$" rel="noreferrer noreferrer noreferrer" target="_blank">
https://urldefense.com/v3/__http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html__;!!Dq0X2DkFhyF93HkjWTBQKhk!A8sB9-qFfbOGiCLPnA6iSE84ZZQy6mW4l0zuzz3NpWm1Wmn2GKqNPUMWg1UBjmSyxhK3Ng$</a>
<br>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr">
<div dir="ltr">Professor Laurence Marks<br>
Department of Materials Science and Engineering<br>
Northwestern University<br>
<a href="http://www.numis.northwestern.edu/" target="_blank" rel="noreferrer">www.numis.northwestern.edu</a>
<div>Corrosion in 4D: <a href="http://www.numis.northwestern.edu/MURI" target="_blank" rel="noreferrer">
www.numis.northwestern.edu/MURI</a><br>
Co-Editor, Acta Cryst A<br>
"Research is to see what everybody else has seen, and to think what nobody else has thought"<br>
Albert Szent-Gyorgi</div>
</div>
</div>
</div>
</div>
</blockquote></div>