[Wien] Parallel calculation in more than 2 nodes
Laurence Marks
laurence.marks at gmail.com
Tue Jul 21 14:46:42 CEST 2020
i.e. paste the result of "cat $WIENROOT/parallel_options" into an email.
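For reference, a stock parallel_options is only a short csh fragment; the exact contents depend on how siteconfig was answered, so the lines below are just an illustrative sketch, not the poster's actual file:

    setenv TASKSET "no"
    setenv USE_REMOTE 1
    setenv MPI_REMOTE 0
    setenv WIEN_GRANULARITY 1
    setenv WIEN_MPIRUN "srun -K1 _EXEC_"

Here _EXEC_ (and, in mpirun-style lines, _NP_ and _HOSTS_) are placeholders that the WIEN2k parallel scripts substitute at run time.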
_____
Professor Laurence Marks
"Research is to see what everybody else has seen, and to think what nobody
else has thought", Albert Szent-Gyorgi
www.numis.northwestern.edu
On Tue, Jul 21, 2020, 07:44 Laurence Marks <laurence.marks at gmail.com> wrote:
> What are you using in parallel_options? The statement:
> "parallel_options file: # setenv WIEN_MPIRUN "srun -K1 _EXEC_"
> Because of compatibility issues we do not use srun; we commented out the
> WIEN_MPIRUN line in the parallel_options file and call mpirun directly." is
> ambiguous.
>
> What mpi?
> _____
> Professor Laurence Marks
> "Research is to see what everybody else has seen, and to think what nobody
> else has thought", Albert Szent-Gyorgi
> www.numis.northwestern.edu
>
> On Tue, Jul 21, 2020, 05:48 MA Weiliang <weiliang.MA at etu.univ-amu.fr>
> wrote:
>
>> Dear WIEN2K users,
>>
>> The cluster we use is a shared-memory system with 16 CPUs per node. The
>> calculation was distributed over 2 nodes with 32 CPUs. However, according
>> to the attached top output, all of the MPI processes were running on the
>> first node; there were no processes on the second node. As you can see,
>> the CPU usage is around 50%. It seems the calculation was not distributed
>> over 2 nodes, but rather the first node (16 CPUs) was split into 32
>> processes, each with half the computing power.
>>
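A quick, WIEN2k-independent way to check where an MPI launcher actually places its ranks is to run a trivial command over the same host list; the node names and hostfile path below are only placeholders:

    mpirun -np 32 -machinefile ./hosts hostname | sort | uniq -c
    # if both nodes are used, the counts should be split, e.g.
    #   16 node1
    #   16 node2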
>> Do you have any ideas about this problem? The .machines file, WIEN2k
>> info, dayfile and job output are attached below. Thank you!
>>
>> Best,
>> Weiliang
>>
>>
>> #========================================#
>> # output of top
>> #----------------------------------------#
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 43504 mc 20 0 614m 262m 27m R 50.2 0.3 21:45.54 lapw1c_mpi
>> 43507 mc
>>
>> #========================================#
>> # .machines (excerpt)
>> #----------------------------------------#
>> granularity:1
>> extrafine:1
>>
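For comparison, a .machines file that runs a single 32-core MPI job spread over two 16-core nodes usually looks roughly like the sketch below (node1/node2 are placeholder host names; a real job script must insert the nodes actually allocated by the scheduler):

    lapw0:node1:16 node2:16
    1:node1:16 node2:16
    granularity:1
    extrafine:1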
>>
>> #========================================#
>> # wien2k info
>> #----------------------------------------#
>> WIEN2k version: 18.2
>> compiler: ifort, icc, mpiifort (Intel 2017 compilers)
>> parallel_options file: # setenv WIEN_MPIRUN "srun -K1 _EXEC_"
>> Because of compatibility issues we do not use srun; we commented out the
>> WIEN_MPIRUN line in the parallel_options file and call mpirun directly.
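If srun cannot be used, the common alternative is not to drop WIEN_MPIRUN entirely but to point it at an mpirun command that still receives the host list generated from .machines, for example (a sketch only; _NP_, _HOSTS_ and _EXEC_ are expanded by the WIEN2k parallel scripts, and the exact flags depend on the MPI flavour):

    setenv WIEN_MPIRUN "mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"

With Intel MPI, the hostfile can also be passed with -f, and -ppn can be added to fix the number of ranks per node.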
>>
>>
>> _______________________________________________
>> Wien mailing list
>> Wien at zeus.theochem.tuwien.ac.at
>>
>> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
>> SEARCH the MAILING-LIST at:
>> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>>
>