[Wien] -machinefile _HOSTS_

Luis Ogando lcodacal at gmail.com
Mon Apr 8 20:29:33 CEST 2013


Dear Prof. Blaha,

   Many thanks for your comments!
   I suspected that without the .machines file I would have problems
mixing k-points and mpi parallelization.
   As for the reason to use mpiexec_mpt instead of mpirun, I will have to
ask the people at the computing center (I cannot imagine a reasonable one).
   Anyway, I will also ask them about the -host and -hostfile possibilities.
   All the best,
                Luis



2013/4/8 Peter Blaha <pblaha at theochem.tuwien.ac.at>

> If you do not mix k-parallel and mpi-parallel runs, it is probably safe
> (i.e. for VERY big cases).
>
> The question is: why are you using   mpiexec_mpt  ??
>
> Usually it is better to use the more general mpirun (or the less general
> mpiexec) instead
> of the "special" mpiexec_mpt command.
>
> Besides that, for some mpi-versions/commands it might be   -host or
> -hostfile instead of -machinefile
>
> Check which mpiXXX commands you have, and, with the man/help pages,
> what the available options are.
>
> To my understanding, PBS_NODEFILE is a variable of some queuing systems
> (like PBS, but maybe some others too). It is not really "mpi-specific".
>
> PS: In "mixed" parallelizations you may want to request e.g. 256 cores
> while having 4 k-points, thus running
> 4 k-parallel lapw1 jobs on 64 cores each. In such a situation you need to
> tell mpi on which nodes it should run
> a particular process. And I'm pretty sure that EVERY (meaningful)
> installation can do this.
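For illustration, a .machines file for the 256-core / 4-k-point scenario described above might look like the sketch below. The hostnames and the 32-cores-per-node layout are assumptions for the example; the exact syntax is documented in the WIEN2k userguide.

```
# 4 k-parallel lapw1 jobs, each running mpi-parallel on 64 cores (2 nodes x 32)
1:node01:32 node02:32
1:node03:32 node04:32
1:node05:32 node06:32
1:node07:32 node08:32
granularity:1
extrafine:1
```

Each `1:` line defines one k-parallel lapw1 job and lists the hosts (and core counts) its mpi processes should run on.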
>
On 08.04.2013 19:43, Luis Ogando wrote:
>
>> Dear all,
>>
>>     Dr. Gavin Abo pointed out to me that I should have mentioned that I am
>> using mpiexec_mpt instead of mpiexec, so, in my case, the mpi execution is
>> controlled by the PBS_NODEFILE variable and not by a machines file
>> (thanks, Dr. Abo, for this).
>>     Anyway, I would like to know if it is safe to run MPI Wien2k with
>> mpiexec_mpt.
>>     All the best,
>>                     Luis
>>
>> PS: I have no problem generating the .machines file "on the fly" (via the
>> queuing system). Even though it will not be used by mpiexec_mpt, I know
>> that it is required by Wien2k.
>>
>>
>>
>>
>> ---------- Forwarded message ----------
>> From: *Gavin Abo* <gsabo at crimson.ua.edu>
>> Date: 2013/4/8
>> Subject: Re: [Wien] -machinefile _HOSTS_
>> To: Luis Ogando <lcodacal at gmail.com>
>>
>>
>> Dear Luis,
>>
>> You probably should have mentioned that you are using 'mpiexec_mpt' not
>> 'mpiexec'.
>>
>> -machinefile is an option for mpiexec
>> [http://linux.die.net/man/1/mpiexec], but it doesn't seem to be an option
>> for mpiexec_mpt
>> [http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=linux&db=man&fname=/usr/share/catman/man1/mpiexec_mpt.1.html].
>>
>> mpiexec_mpt seems to use the PBS_NODEFILE variable instead of
>> -machinefile [http://www.arl.hpc.mil/docs/pbsUserGuide.html].
>> So your parallel_options are probably fine as long as the PBS_NODEFILE
>> variable is set automatically by your system or by you.
>>
>> Kind Regards,
>>
>> Gavin
>>
>>
>> On 4/8/2013 10:18 AM, Luis Ogando wrote:
>>
>>> Hi Gavin,
>>>
>>>    Thank you for your answer.
>>>    Actually, I generated the .machines files for the queuing system
>>> without problems. The issue is that the "setenv WIEN_MPIRUN" line in the
>>> "parallel_options" file has to
>>> be " setenv WIEN_MPIRUN "mpiexec_mpt -np _NP_  _EXEC_" ", without the
>>> "-machinefile" option, because it is not defined on the system.
>>>    All the best,
>>>            Luis
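For reference, the WIEN_MPIRUN setting discussed above, written out as a single csh line in the parallel_options file (taken verbatim from this thread; _NP_ and _EXEC_ are placeholders that WIEN2k substitutes at run time):

```
setenv WIEN_MPIRUN "mpiexec_mpt -np _NP_ _EXEC_"
```

Note that the -machinefile flag is simply dropped here, since mpiexec_mpt takes its host list from PBS_NODEFILE instead.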
>>>
>>>
>>>
>>>
>>> 2013/4/8 Gavin Abo <gsabo at crimson.ua.edu>
>>>
>>>
>>>     Dear Luis,
>>>
>>>     I'm sending this email off the mailing list as Prof. Marks or Blaha
>>> might respond with a better answer.
>>>
>>>     I think the answer is yes, it can impact Wien2k performance, because
>>> the machinefile variable contains the list of hostnames for 'multiple'
>>> nodes.
>>>      Without it, the calculation will likely run on only 'one' node.
>>>
>>>     Even if you don't have admin privileges, you can likely still define
>>> the machinefile variable as a user by creating a .machines file in your
>>> case directory.  However,
>>>     the creation of the .machines file may depend on whether or not you
>>> are required to use a queuing system.  If you are not required to use a
>>> queuing system, you should
>>>     be able to copy the .machines file in SRC_templates to your case
>>> directory and then edit it in a text editor (note: you might not see the
>>> .machines file unless you do a
>>>     directory listing that includes hidden files).  The .machines file
>>> should be described in the Wien2k userguide, or you can search the
>>> internet for some examples
>>>     [https://www.xsede.org/documents/10157/305826/ecss_hliu_051012.pdf].
>>> If you are required to use a queuing system, you likely need to set up a
>>> script that will create
>>>     the .machines file as described at the link:
>>>
>>>     http://www.wien2k.at/reg_user/faq/pbs.html
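Such a script can be sketched roughly as follows. This is a minimal sh sketch, not the FAQ's actual script: it assumes PBS_NODEFILE lists one hostname per allocated core and assigns one k-parallel lapw1 job per node; the fallback file name nodefile.txt and the sample hostnames exist only so the sketch runs outside a batch job.

```shell
#!/bin/sh
# Sketch: build a WIEN2k .machines file from a PBS-style node file.
# PBS_NODEFILE normally lists one hostname per allocated core.
NODEFILE=${PBS_NODEFILE:-nodefile.txt}

# Outside a real PBS job, create a sample host list for illustration
# (hypothetical hosts: 2 cores on node01, 2 cores on node02).
[ -f "$NODEFILE" ] || printf 'node01\nnode01\nnode02\nnode02\n' > "$NODEFILE"

# Count the cores per host and emit one "1:host:ncores" line per host,
# i.e. one k-parallel lapw1 job running mpi-parallel on that host.
sort "$NODEFILE" | uniq -c | while read count host; do
    echo "1:${host}:${count}"
done > .machines

# Trailing keywords used by WIEN2k's .machines format.
echo "granularity:1" >> .machines
echo "extrafine:1"   >> .machines

cat .machines
```

A real submission script would also cd to the case directory and launch run_lapw -p after writing the file; the WIEN2k FAQ page linked above shows complete examples.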
>>>
>>>     If you have problems creating the script, your administrator(s) or
>>> support person(s) should know the most about your computer system, so they
>>> can likely help you
>>>     create a script that will work on your system.
>>>
>>>     Kind Regards,
>>>
>>>     Gavin
>>>
>>>
>>>     On 4/8/2013 6:19 AM, Luis Ogando wrote:
>>>
>>>         Dear Prof. Marks, Blaha and Wien2k community,
>>>
>>>            I want to do calculations on a computer where the machinefile
>>> variable is not defined for mpiexec (I am not the administrator). I would
>>> like to know if
>>>         this will have some impact on MPI Wien2k performance.
>>>            Thanks in advance,
>>>                                         Luis
>>>
>>>
>>>
>>>
>>
>>
>>
>> _______________________________________________
>> Wien mailing list
>> Wien at zeus.theochem.tuwien.ac.at
>> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
>> SEARCH the MAILING-LIST at:
>> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>>
>>
> --
> -----------------------------------------
> Peter Blaha
> Inst. Materials Chemistry, TU Vienna
> Getreidemarkt 9, A-1060 Vienna, Austria
> Tel: +43-1-5880115671
> Fax: +43-1-5880115698
> email: pblaha at theochem.tuwien.ac.at
> -----------------------------------------
>

