<div dir="ltr">Dear Prof. Marks,<div><br></div><div style> First of all, thank you very much for your help !</div><div style> Unfortunately, your suggestions did not work in my SGI system. Despite of this, I have now WIEN2k working in parallel even when more than one node is used. My solution where to install OpenMPI with ifort and icc in the SGI machine and use them to compile and run WIEN2k.</div>
<div style> We saw that mpiexec-mpt does not allow the use of a "machinefile" built by the user (at least, this can not be done by a beginner like me). As the Intel MPI is installed by the vendor (SGI team), I believe that it is somehow configured in a similar way. As a result, when I tried the compilation and execution with Intel MPI, I got some error messages complaining about the -machinefile option. When I tried your suggestion of compiling with Intel MPI but using the hopen file to launch the job with OpenMPI, the error messages complained about the <span style="font-family:arial,sans-serif;font-size:13px">-bootstrap-exec option.</span></div>
<div style> Well, it looks like that the best option is to use compilers and MPI softwares not optimized for an specific system by others.</div><div style> Thank you again !</div><div style> All the best,</div><div style>
Luis</div><div style>PS: in the parallel_options file, I had to set the complete path for the OpenMPI mpirun, despite of defining it in my .bashrc</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
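[Editor's note, not part of the original mail: a minimal sketch of such a parallel_options line, assuming a hypothetical OpenMPI install prefix /opt/openmpi (substitute your own path):

setenv WIEN_MPIRUN "/opt/openmpi/bin/mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_ "

The absolute path matters because the non-interactive shells spawned by the WIEN2k scripts do not necessarily read .bashrc, so a PATH set there may never reach mpirun.]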
2013/8/3 Laurence Marks <L-marks@northwestern.edu>
I am not sure I can give you the right answer; my guess is to set it
to 1, but I do not know all the details of your system, and if I
remember right you have an SGI system. Try both, then let us/me know
what works (or does not).

For reference, I have it working fine with USE_REMOTE 1, and I don't
currently want to change it to test (particularly as I am travelling).
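[Editor's sketch, not part of the original mail: the two variants to try in $WIENROOT/parallel_options are

setenv USE_REMOTE 0
setenv USE_REMOTE 1

where, as far as I understand the WIEN2k scripts, USE_REMOTE 1 starts the parallel jobs on the nodes through ssh and USE_REMOTE 0 starts them locally.]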
<div class="im HOEnZb"><br>
On Fri, Aug 2, 2013 at 8:36 AM, Luis Ogando <<a href="mailto:lcodacal@gmail.com">lcodacal@gmail.com</a>> wrote:<br>
> Dear Prof. Marks,<br>
><br>
</div><div class="im HOEnZb">> Just a quick question : in case that the openmpi launcher replaces ssh,<br>
> should I change USE_REMOTE to 0 in a cluster ?<br>
> Thank you one more time,
> Luis
>
> 2013/7/27 Laurence Marks <L-marks@northwestern.edu>
>>
</div><div class="HOEnZb"><div class="h5">>> WARNING 1: To be used with care, and customized as needed<br>
>> WARNING 2: Valid for impi and perhaps other, but not all variants<br>
>> WARNING 3: Please look at what these options mean...<br>
>>
>> My parallel_options file on NU's supercomputer, which contains
>> various debug and other options (some recommended by Intel, some by
>> the local sys_admin):
>>
>> setenv USE_REMOTE 1
>> setenv MPI_REMOTE 0
>> setenv WIEN_GRANULARITY 1
>> setenv DAPL_DBG_TYPE 0
>> # Normal
>> #setenv WIEN_MPIRUN "mpirun -n _NP_ -machinefile _HOSTS_ _EXEC_ "
>>
>> # To turn on verbose ssh output
>> #setenv WIEN_MPIRUN "mpirun -bootstrap-exec ~/bin/hssh -n _NP_ -machinefile _HOSTS_ _EXEC_ "
>>
>> # To use a more recent, privately compiled ssh
>> #setenv WIEN_MPIRUN "mpirun -bootstrap-exec $HOME/local/bin/ssh -n _NP_ -machinefile _HOSTS_ _EXEC_ "
>>
>> # To use openmpi to launch
>> setenv WIEN_MPIRUN "mpirun -bootstrap-exec $WIENROOT/hopen -n _NP_ -machinefile _HOSTS_ _EXEC_ "
>>
>> set sleepy = 0.2
>> set delay = 0.1
>> unset DAPL_DBG
>> # Turn on Hydra debug on Quest
>> #setenv I_MPI_HYDRA_DEBUG 1
>> # Turn on MPI debug
>> #setenv I_MPI_DEBUG 1
>> #setenv I_MPI_DEBUG_OUTPUT mpi_debug%h_%r
>> setenv I_MPI_FABRICS_LIST dapl,tcp
>> setenv I_MPI_FALLBACK enable
>>
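>> [Editor's sketch, not part of the original mail: WIEN2k substitutes the
>> _NP_, _HOSTS_ and _EXEC_ placeholders at run time, so the active line
>> above would expand to something like (all values hypothetical)
>>
>> mpirun -bootstrap-exec $WIENROOT/hopen -n 4 -machinefile .machine1 $WIENROOT/lapw1_mpi lapw1_1.def
>>
>> which can also be run by hand as a sanity check.]
>>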
>> On Sat, Jul 27, 2013 at 2:53 PM, Luis Ogando <lcodacal@gmail.com> wrote:
>> > Dear Prof. Marks,
>> >
>> > Could you please send me a template for the parallel_options file
>> > where this implementation was done?
>> > I am sorry to ask, but I am really far from being an expert.
>> > All the best,
>> > Luis
>> >
>> > 2013/7/22 Laurence Marks <L-marks@northwestern.edu>
>> >>
>> >> A brief follow-up which may be useful (or not) for others with mpi
>> >> problems in the future. I have been able to work around a mysterious
>> >> impi/ssh bug on NU's supercomputer by replacing ssh with the
>> >> openmpi mpirun launcher. The hack is gross, but very stable.
>> >>
>> >> The steps:
>> >> 1) Add "--bootstrap-exec=$WIENROOT/hopen" to the mpirun line in
>> >> $WIENROOT/parallel_options.
>> >> 2) Create the executable file $WIENROOT/hopen containing
>> >>
>> >> #!/bin/bash
>> >> # drop the "-x -q" flags that impi's hydra adds for ssh
>> >> a=`echo $@ | sed -e 's/-x -q//'`
>> >> # what is left is "<host> <command...>"; launch it via openmpi
>> >> $OPENMPI/bin/mpirun -np 1 --host $a
>> >>
>> >> (change $OPENMPI to where openmpi has been compiled).
>> >>
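>> >> [Editor's sketch, not part of the original mail: with a hypothetical
>> >> host n042, impi's hydra would invoke the wrapper roughly as
>> >>
>> >> $WIENROOT/hopen -x -q n042 /path/to/pmi_proxy --control-port ...
>> >>
>> >> which hopen rewrites into
>> >>
>> >> $OPENMPI/bin/mpirun -np 1 --host n042 /path/to/pmi_proxy --control-port ...
>> >>
>> >> so the remote proxy is started by openmpi's launcher instead of ssh.]
>> >>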
>> >> On Thu, Jul 18, 2013 at 10:38 AM, Laurence Marks
>> >> <L-marks@northwestern.edu> wrote:
>> >> > On a cluster I am using I am having a problem with ssh connections
>> >> > as part of impi/mpirun about 0.1-0.2% of the time; what happens is
>> >> > that they fail to launch and become zombies (ps shows "[ssh]
>> >> > <defunct>"). Since fiddling through all the options within mpirun
>> >> > can be hard (particularly for impi, which is rather fast), I found
>> >> > (after a comment from someone on the openssh list) a useful hack.
>> >> > I am providing it here as it is a nice way around things, and it
>> >> > might be useful to others in the future.
>> >> >
>> >> > The "trick" is to add --bootstrap-exec ~/bin/hssh or similar to the
>> >> > mpirun line in $WIENROOT/parallel_options, then create the
>> >> > executable ~/bin/hssh with something similar to:
>> >> >
>> >> > #!/bin/bash
>> >> > # replace the -q (quiet) flag that impi forces with -v (verbose)
>> >> > a=`echo $@ | sed -e 's/-q/-v/'`
>> >> > ssh $a
>> >> >
>> >> > The above allows me to turn verbose output on in the ssh command,
>> >> > since impi insists on setting -q (quiet). For other cases something
>> >> > similar can be done.
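>> >> >
>> >> > [Editor's sketch, not part of the original mail: the same wrapper
>> >> > idea can inject any ssh option impi does not expose; for example,
>> >> > with a hypothetical per-cluster key id_rsa_cluster:
>> >> >
>> >> > #!/bin/bash
>> >> > # pass hydra's arguments through, adding our own ssh options
>> >> > ssh -o ServerAliveInterval=30 -i $HOME/.ssh/id_rsa_cluster "$@"
>> >> > ]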

--
Professor Laurence Marks
Department of Materials Science and Engineering
Northwestern University
www.numis.northwestern.edu 1-847-491-3996
"Research is to see what everybody else has seen, and to think what
nobody else has thought"
Albert Szent-Gyorgi
_______________________________________________
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at: http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html