[Wien] MPI setup on a multicore machine

Martin Gmitra martin.gmitra at gmail.com
Wed Oct 23 08:34:06 CEST 2013


Dear all,

Thanks for the replies. On this machine there is no problem running k-point
parallelized calculations.

The .machines file for the MPI run has the form:
lapw0:localhost:4
1:localhost:4
2:localhost:4
hf:localhost:4
granularity:1


It is a Debian system with a symlink /bin/csh -> /etc/alternatives/csh.
dpkg -l csh gives:
ii  csh    20070713-2    Shell with C-like syntax, standard login shell on BSD systems


I tried replacing csh with tcsh in the header of the lapw1para script,
changing #!/bin/csh -f to #!/bin/tcsh -f,
but I get the same error: @: Expression Syntax.
The tcsh version:
tcsh 6.17.02 (Astron) 2010-05-12 (x86_64-unknown-linux) options
wide,nls,dl,al,kan,rh,nd,color,filec


My problem is in lines 266-277 of the lapw1para script:

set i = 1
set sumn = 0
while ($i <= $#weigh)
    @ weigh[$i] *= $klist
    @ weigh[$i] /= $sumw
    @ weigh[$i] /= $granularity
    if ($weigh[$i] == 0 ) then
        @  weigh[$i] ++  # oops, we divided by too big a number
    endif
    @ sumn += $weigh[$i]
    @ i ++
end
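Whether the failure really comes from the shell's `@` arithmetic on array elements can be checked in isolation. The sketch below is only an illustration, not part of WIEN2k: it extracts just that arithmetic into a standalone script with made-up weights and counters (the file path and the values of klist, sumw, granularity are hypothetical) and runs it under tcsh:

```shell
# Write a stripped-down version of the lapw1para weight loop to a test
# file (hypothetical path; made-up input values, not WIEN2k's actual ones).
cat > /tmp/weigh_test.csh <<'EOF'
#!/bin/tcsh -f
set weigh = (2 3 5)
set klist = 12
set sumw = 10
set granularity = 1
set sumn = 0
set i = 1
while ($i <= $#weigh)
    # The array-element arithmetic suspected of triggering
    # "@: Expression Syntax." under Debian's bsd-csh:
    @ weigh[$i] *= $klist
    @ weigh[$i] /= $sumw
    @ weigh[$i] /= $granularity
    if ($weigh[$i] == 0) then
        @ weigh[$i]++
    endif
    @ sumn += $weigh[$i]
    @ i++
end
echo $sumn
EOF

# Run it explicitly under tcsh, bypassing the shebang:
if command -v tcsh >/dev/null 2>&1; then
    tcsh -f /tmp/weigh_test.csh
else
    echo "tcsh not installed"
fi
```

If tcsh handles this fine but `csh -f /tmp/weigh_test.csh` fails with the same `@: Expression Syntax.` error, that would point at the interpreter: note that the failing command in the log is lapw1cpara, so its header (not only lapw1para's) would need checking too.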

Thanks in advance for any suggestion,
Martin Gmitra
Uni Regensburg


On Wed, Oct 23, 2013 at 7:51 AM, Peter Blaha
<pblaha at theochem.tuwien.ac.at> wrote:
> Wrong syntax. You need a "speed" parameter. But of course, the speed should
> be the same for shared memory:
>
>
> 1:localhost:4
> 1:localhost:4
>
> On 22.10.2013 at 18:42, Oliver Albertini wrote:
>>
>> If the jobs are all on the same localhost, then they should all be set up
>> with the same speed:
>>
>> lapw0:localhost:4
>> localhost:4
>> localhost:4
>> granularity:1
>>
>>
>> On Tue, Oct 22, 2013 at 2:21 AM, <tran at theochem.tuwien.ac.at
>> <mailto:tran at theochem.tuwien.ac.at>> wrote:
>>
>>     Hi,
>>
>>     I don't know what the problem is, but I can say that in .machines
>>     there is no line specific to the HF module. If lapw1 and lapw2 are
>>     run in parallel, then the same applies to hf.
>>
>>     F. Tran
>>
>>
>>     On Tue, 22 Oct 2013, Martin Gmitra wrote:
>>
>>         Dear Wien2k users,
>>
>>         We are running a recent version of Wien2k, v13.1, with k-point
>>         parallelization. To perform screened HF calculations, we believe
>>         that MPI parallelization would speed things up. The calculations
>>         are intended, for testing purposes, to run on a local multicore
>>         machine.
>>
>>         Our .machines file looks like:
>>         lapw0:localhost:4
>>         1:localhost:4
>>         2:localhost:4
>>         hf:localhost:4
>>         granularity:1
>>
>>         Invoking x lapw0 -p
>>         starting parallel lapw0 at Tue Oct 22 09:15:48 CEST 2013
>>         -------- .machine0 : 4 processors
>>         LAPW0 END
>>         LAPW0 END
>>         LAPW0 END
>>         LAPW0 END
>>         58.2u 0.6s 0:16.92 348.4% 0+0k 0+37528io 21pf+0w
>>
>>         runs lapw0 in parallel, while
>>         x lapw1 -up -c -p
>>         starting parallel lapw1 at Tue Oct 22 09:18:30 CEST 2013
>>         ->  starting parallel LAPW1 jobs at Tue Oct 22 09:18:30 CEST 2013
>>         running LAPW1 in parallel mode (using .machines)
>>         Granularity set to 1
>>         Extrafine unset
>>         @: Expression Syntax.
>>         0.0u 0.0s 0:00.10 10.0% 0+0k 0+64io 0pf+0w
>>         error: command   /temp_local/CODES/WIEN2k_v13___mpi/lapw1cpara -up -c uplapw1.def   failed
>>
>>         The parallel_options file looks like:
>>         setenv TASKSET "no"
>>         setenv USE_REMOTE 0
>>         setenv MPI_REMOTE 0
>>         setenv WIEN_GRANULARITY 1
>>
>>         Before starting the tests we load all libraries from the Intel
>>         compiler, set WIENROOT, and set:
>>             export TASKSET="no"
>>             export USE_REMOTE=0
>>             export MPI_REMOTE=0
>>             export WIEN_GRANULARITY=1
>>             export WIEN_MPIRUN="mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"
>>
>>         Do you have any idea why lapw1 does not start?
>>         Many thanks in advance,
>>
>>         Martin Gmitra
>>         Uni Regensburg
>>         _______________________________________________
>>         Wien mailing list
>>         Wien at zeus.theochem.tuwien.ac.at
>>         http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
>>         SEARCH the MAILING-LIST at:
>>         http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>>
>
> --
> -----------------------------------------
> Peter Blaha
> Inst. Materials Chemistry, TU Vienna
> Getreidemarkt 9, A-1060 Vienna, Austria
> Tel: +43-1-5880115671
> Fax: +43-1-5880115698
> email: pblaha at theochem.tuwien.ac.at
> -----------------------------------------
>

