<div dir="ltr">Dear Oleg Rubel,<div><br></div><div style> I agree with you ! This is the reason I asked for hints from someone that uses WIEN with mpiexec_mpt (to save efforts and time).</div><div style> Thank you again !</div>
<div style> All the best,</div><div style> Luis</div><div style><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2013/7/11 Oleg Rubel <span dir="ltr"><<a href="mailto:orubel@lakeheadu.ca" target="_blank">orubel@lakeheadu.ca</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p dir="ltr">Dear Luis,</p>
<p dir="ltr">It looks like the problem is not in Wien2k. I would recommend to make sure that you can get a list of host names correctly before proceeding with wien. There are slight difference between various mpi implementation in a way of passing the host name list.</p>
<span class="HOEnZb"><font color="#888888">
<p dir="ltr">Oleg</p></font></span><div class="HOEnZb"><div class="h5">
<div class="gmail_quote">On 2013-07-11 9:52 AM, "Luis Ogando" <<a href="mailto:lcodacal@gmail.com" target="_blank">lcodacal@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Dear Prof. Marks and Rubel,<div><br></div><div> Many thanks for your kind responses.</div><div> I am forwarding your messages to the computation center. As soon as I have any reply, I will contact you.</div>
<div><br></div><div> I know that they have other <span style="font-family:arial,sans-serif;font-size:13px">wrappers (Intel MPI, for example), but they argue that mpiexec_mpt is the optimized option.</span></div>
<div><font face="arial, sans-serif"> I really doubt that this option will succeed, because I am getting the following error message in case.dayfile (bold)</font></div><div><font face="arial, sans-serif"><br>
</font></div><div><font face="arial, sans-serif">================================================================================</font></div><div><font face="arial, sans-serif"><div>Calculating InPwurt15InPzb3 in /home/ice/proj/proj546/ogando/Wien/Calculos/InP/InPwurtInPzb/15camadasWZ+3ZB/InPwurt15InPzb3</div>
<div>on r1i0n8 with PID 6433</div><div>using WIEN2k_12.1 (Release 22/7/2012) in /home/ice/proj/proj546/ogando/RICARDO2/wien/src</div><div><br></div><div><br></div><div> start <span style="white-space:pre-wrap">        </span>(Wed Jul 10 13:29:42 BRT 2013) with lapw0 (150/99 to go)</div>
<div><br></div><div> cycle 1 <span style="white-space:pre-wrap">        </span>(Wed Jul 10 13:29:42 BRT 2013) <span style="white-space:pre-wrap">        </span>(150/99 to go)</div><div><br></div><div>> lapw0 -grr -p<span style="white-space:pre-wrap">        </span>(13:29:42) starting parallel lapw0 at Wed Jul 10 13:29:42 BRT 2013</div>
<div>-------- .machine0 : 12 processors</div><div><b>mpiexec_mpt error: -machinefile option not supported.</b></div><div>0.016u 0.008s 0:00.40 2.5%<span style="white-space:pre-wrap">        </span>0+0k 0+176io 0pf+0w</div><div>
error: command /home/ice/proj/proj546/ogando/RICARDO2/wien/src/lapw0para -c lapw0.def failed</div><div><br></div><div>> stop error</div><div>================================================================================<br>
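</div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif"> From the error, it seems that mpiexec_mpt refuses the -machinefile flag and takes its host list from the PBS allocation itself, so perhaps the WIEN_MPIRUN definition in $WIENROOT/parallel_options would have to be changed to something like the following (only a guess on my side, not tested):</font></div><div><font face="arial, sans-serif">setenv WIEN_MPIRUN "mpiexec_mpt -n _NP_ _EXEC_"</font></div><div><br>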
</div><div><br></div><div> Regarding the -sgi option, I am using the -pbs option because PBS is the queueing system. As I said, it works well for parallel execution that uses just one node.</div><div> Many thanks again,</div>
<div> Luis</div><div><br></div></font></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2013/7/11 Oleg Rubel <span dir="ltr"><<a href="mailto:orubel@lakeheadu.ca" target="_blank">orubel@lakeheadu.ca</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear Luis,<br>
<br>
Can you run other MPI codes under the SGI scheduler on your cluster? In any case, I would suggest first trying the simplest check:<br>
<br>
mpiexec -n $NSLOTS hostname<br>
<br>
This is what we use for WIEN2k:<br>
<br>
mpiexec -machinefile _HOSTS_ -n _NP_ _EXEC_<br>
<br>
The next line is also useful to ensure a proper CPU load:<br>
<br>
setenv MV2_ENABLE_AFFINITY 0<br>
<br>
<br>
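(For reference: the _HOSTS_/_NP_/_EXEC_ placeholders above are the ones WIEN2k expands, so this template would normally go into the WIEN_MPIRUN definition in $WIENROOT/parallel_options, assuming your WIEN2k version defines that variable. A sketch:)<br>
<br>
setenv WIEN_MPIRUN "mpiexec -machinefile _HOSTS_ -n _NP_ _EXEC_"<br>
setenv MV2_ENABLE_AFFINITY 0   # MVAPICH2-specific; can also be set in the job script<br>
<br>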
I hope this will help<br>
Oleg<div><div><br>
<br>
On 13-07-11 8:32 AM, Luis Ogando wrote:<br>
</div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>
Dear WIEN2k community,<br>
<br>
I am trying to use WIEN2k 12.1 on an SGI cluster. When I perform<br>
parallel calculations using just "one" node, I can use mpirun and<br>
everything goes fine (many thanks to Prof. Marks and his SRC_mpiutil<br>
directory).<br>
On the other hand, when I want to use more than one node, I have to<br>
use mpiexec_mpt and the calculation fails. I also tried mpirun for<br>
more than one node, but this is not the proper way on an SGI system and I<br>
did not succeed.<br>
Well, I would like to know if anyone has experience in using WIEN2k<br>
with mpiexec_mpt and could give me any hints.<br>
I can give more information; this is only an initial request for help.<br>
All the best,<br>
Luis<br>
<br>
<br>
<br></div></div>
</blockquote>
</blockquote></div><br></div>
<br></blockquote></div>
</div></div><br>_______________________________________________<br>
Wien mailing list<br>
<a href="mailto:Wien@zeus.theochem.tuwien.ac.at">Wien@zeus.theochem.tuwien.ac.at</a><br>
<a href="http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien" target="_blank">http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien</a><br>
SEARCH the MAILING-LIST at: <a href="http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html" target="_blank">http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html</a><br>
<br></blockquote></div><br></div>