[Wien] mpiexec_mpt

Luis Ogando lcodacal at gmail.com
Thu Jul 11 16:30:25 CEST 2013


Dear Oleg Rubel,

   I agree with you! This is the reason I asked for hints from someone
who uses WIEN2k with mpiexec_mpt (to save effort and time).
   Thank you again !
   All the best,
                 Luis



2013/7/11 Oleg Rubel <orubel at lakeheadu.ca>

> Dear Luis,
>
> It looks like the problem is not in WIEN2k. I would recommend making sure
> that you can get the list of host names correctly before proceeding with
> WIEN2k. There are slight differences between the various MPI implementations
> in the way the host name list is passed.
>
> Oleg
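
A quick way to verify the host list under PBS with SGI MPT (a sketch only: the rank count below is a placeholder, and it assumes the commands are run inside a PBS job) is to compare what PBS allocated with what the launcher actually starts on:

   cat $PBS_NODEFILE              # hosts PBS allocated to the job
   mpiexec_mpt -n 24 hostname     # placeholder rank count; should report the same hosts
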
> On 2013-07-11 9:52 AM, "Luis Ogando" <lcodacal at gmail.com> wrote:
>
>> Dear Prof. Marks and Rubel,
>>
>>    Many thanks for your kind responses.
>>    I am forwarding your messages to the computation center. As soon as I
>> have any reply, I will contact you.
>>
>>    I know that they have other wrappers (Intel MPI, for example), but
>> they argue that mpiexec_mpt is the optimized option.
>> I really doubt that this option will succeed, because I am getting the
>> following error message in case.dayfile:
>>
>>
>> ================================================================================
>> Calculating InPwurt15InPzb3 in
>> /home/ice/proj/proj546/ogando/Wien/Calculos/InP/InPwurtInPzb/15camadasWZ+3ZB/InPwurt15InPzb3
>> on r1i0n8 with PID 6433
>> using WIEN2k_12.1 (Release 22/7/2012) in
>> /home/ice/proj/proj546/ogando/RICARDO2/wien/src
>>
>>
>>     start (Wed Jul 10 13:29:42 BRT 2013) with lapw0 (150/99 to go)
>>
>>     cycle 1 (Wed Jul 10 13:29:42 BRT 2013) (150/99 to go)
>>
>> >   lapw0 -grr -p (13:29:42) starting parallel lapw0 at Wed Jul 10
>> 13:29:42 BRT 2013
>> -------- .machine0 : 12 processors
>> mpiexec_mpt error: -machinefile option not supported.
>> 0.016u 0.008s 0:00.40 2.5% 0+0k 0+176io 0pf+0w
>> error: command
>> /home/ice/proj/proj546/ogando/RICARDO2/wien/src/lapw0para -c lapw0.def
>> failed
>>
>> >   stop error
>>
>> ================================================================================
>>
>>    Regarding the -sgi option: I am using the -pbs option because PBS is the
>> queueing system. As I said, it works well for parallel execution that uses
>> just one node.
>>    Many thanks again,
>>                  Luis
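
Judging from the "-machinefile option not supported" message above, it is the launcher line that WIEN2k builds in lapw0para which mpiexec_mpt rejects. A possible workaround, sketched here and untested, would be to change WIEN_MPIRUN in $WIENROOT/parallel_options so that it calls mpiexec_mpt without the -machinefile switch, on the assumption that SGI MPT takes its host list from PBS itself:

   # $WIENROOT/parallel_options (csh syntax) -- untested sketch
   setenv WIEN_MPIRUN "mpiexec_mpt -n _NP_ _EXEC_"

Whether the MPI ranks then really land on the hosts listed in .machines would still have to be checked, for example with a hostname test like the one mentioned in this thread.
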
>>
>>
>>
>> 2013/7/11 Oleg Rubel <orubel at lakeheadu.ca>
>>
>>> Dear Luis,
>>>
>>> Can you run other MPI codes under the SGI scheduler on your cluster? In any
>>> case, I would suggest first trying the simplest check:
>>>
>>> mpiexec -n $NSLOTS hostname
>>>
>>> This is what we use for WIEN2k:
>>>
>>> mpiexec -machinefile _HOSTS_ -n _NP_ _EXEC_
>>>
>>> The next line is also useful to ensure proper CPU load:
>>>
>>> setenv MV2_ENABLE_AFFINITY 0
>>>
>>>
>>> I hope this will help
>>> Oleg
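
Note that MV2_ENABLE_AFFINITY is an MVAPICH2 variable; if the multi-node runs go through SGI MPT instead, the corresponding placement control would be MPT's own environment, for example (an assumption about the cluster's MPT setup, not verified here):

   # rough MPT counterpart of the MVAPICH2 affinity setting above (sketch)
   setenv MPI_DSM_DISTRIBUTE 1
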
>>>
>>>
>>> On 13-07-11 8:32 AM, Luis Ogando wrote:
>>>
>>>> Dear WIEN2k community,
>>>>
>>>>     I am trying to use WIEN2k 12.1 on an SGI cluster. When I perform
>>>> parallel calculations using just one node, I can use mpirun and
>>>> everything goes fine (many thanks to Prof. Marks and his SRC_mpiutil
>>>> directory).
>>>>     On the other hand, when I want to use more than one node, I have to
>>>> use mpiexec_mpt and the calculation fails. I also tried mpirun for
>>>> more than one node, but that is not the proper way on an SGI system and I
>>>> did not succeed.
>>>>     Well, I would like to know if anyone has experience in using WIEN2k
>>>> with mpiexec_mpt and could give me any hint.
>>>>      I can give more information; this is only an initial request for help.
>>>>     All the best,
>>>>                        Luis
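
For reference, a multi-node MPI run in WIEN2k also needs all hosts listed in the .machines file. A minimal two-node sketch (the second host name, r1i0n9, is purely a placeholder; 12 cores per node as in the dayfile quoted above):

   # .machines -- sketch: one MPI job spanning 2 x 12 cores
   lapw0:r1i0n8:12 r1i0n9:12
   1:r1i0n8:12 r1i0n9:12
   granularity:1
   extrafine:1
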
>>>>
>>>>
>>>>