<div dir="ltr"><div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">Dear Lyudmila,</div><div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">This is almost certainly an OS problem, and there is little that you can do except find a better supercomputer!</div><div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">It could be an NFS problem, in which case setting SCRATCH to a local directory on each compute node might help. Alternatively, while you are supposed to have all of any given node, the jobs might not be running that way -- a lot depends upon how srun is configured.</div><div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">One thing to test is internal MPI (same node) versus cross-node MPI. The former should always be fast.</div><div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif;color:#000000">And... buy a sysadmin a beer (vodka) and have him/her explain in more detail how things are configured.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Oct 6, 2020 at 10:14 AM Lyudmila Dobysheva <<a href="mailto:lyuka17@mail.ru">lyuka17@mail.ru</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear all,<br>
<br>
I have started working on a supercomputer, and sometimes I see delays <br>
during execution. They occur randomly, most often during lapw0, but <br>
also in other programs (an extra 7-20 min). The administrators say that <br>
there can sometimes be problems with the network speed.<br>
But I cannot understand this: at the moment I use only one node with 16 <br>
processors. I would expect that if I send the task to a single node, <br>
network problems between computers should not matter until the whole <br>
task ends.<br>
Have I perhaps set the SCRATCH variable incorrectly?<br>
In .bashrc:<br>
export SCRATCH=./<br>
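[A node-local alternative could be sketched as follows. This is only an illustration: /tmp/wien2k-$USER is an assumed path, and whether /tmp is actually node-local storage must be checked with the administrators.]

```shell
# Hypothetical .bashrc fragment: point SCRATCH at node-local storage
# instead of the working directory, which may be NFS-mounted.
# /tmp/wien2k-$USER is an assumed path; any node-local filesystem works.
export SCRATCH="/tmp/wien2k-$USER"
mkdir -p "$SCRATCH"
```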
<br>
During execution I can see the cycle progressing; that is, after lapw0 <br>
I see its output files. This means that after lapw0 the compute node <br>
sends the files to the head node, and maybe this is where it waits? Is <br>
this behavior correct? I expected not to see the intermediate stages <br>
until the job ends.<br>
And the programs themselves (lapw0, lapw1, lapw2, lcore, mixer) - are <br>
they perhaps reloaded onto the compute node anew on every cycle?<br>
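[One quick way to separate same-node from cross-node MPI behavior is to time a trivial command launched both ways. This is only a sketch, assuming SLURM's srun is available as in the parallel_options below; the node and rank counts are placeholders for this 16-core case.]

```shell
# Time the same 16 ranks on one node vs. spread over two nodes.
# If only the cross-node run is slow or erratic, the interconnect
# (or NFS traffic over it) is the likely culprit.
time srun -N 1 -n 16 hostname   # all ranks on a single node
time srun -N 2 -n 16 hostname   # ranks split across two nodes
```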
<br>
Best regards<br>
Lyudmila Dobysheva<br>
<br>
some details WIEN2k_19.2<br>
ifort 64 19.1.0.166<br>
---------------<br>
parallel_options:<br>
setenv TASKSET "srun "<br>
if ( ! $?USE_REMOTE ) setenv USE_REMOTE 1<br>
if ( ! $?MPI_REMOTE ) setenv MPI_REMOTE 0<br>
setenv WIEN_GRANULARITY 1<br>
setenv DELAY 0.1<br>
setenv SLEEPY 1<br>
if ( ! $?WIEN_MPIRUN) setenv WIEN_MPIRUN "srun -K -N_nodes_ -n_NP_ -r_offset_ _PINNING_ _EXEC_"<br>
if ( ! $?CORES_PER_NODE) setenv CORES_PER_NODE 16<br>
--------------<br>
WIEN2k_OPTIONS:<br>
current:FOPT:-O -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include<br>
current:FPOPT:-O -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include<br>
current:OMP_SWITCH:-qopenmp<br>
current:LDFLAGS:$(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -lpthread -lm -ldl -liomp5<br>
current:DPARALLEL:'-DParallel'<br>
current:R_LIBS:-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core<br>
current:FFTWROOT:/home/uffff/.local/<br>
current:FFTW_VERSION:FFTW3<br>
current:FFTW_LIB:lib<br>
current:FFTW_LIBNAME:fftw3<br>
current:LIBXCROOT:<br>
current:LIBXC_FORTRAN:<br>
current:LIBXC_LIBNAME:<br>
current:LIBXC_LIBDNAME:<br>
current:SCALAPACKROOT:$(MKLROOT)/lib/<br>
current:SCALAPACK_LIBNAME:mkl_scalapack_lp64<br>
current:BLACSROOT:$(MKLROOT)/lib/<br>
current:BLACS_LIBNAME:mkl_blacs_intelmpi_lp64<br>
current:ELPAROOT:<br>
current:ELPA_VERSION:<br>
current:ELPA_LIB:<br>
current:ELPA_LIBNAME:<br>
current:MPIRUN:srun -K -N_nodes_ -n_NP_ -r_offset_ _PINNING_ _EXEC_<br>
current:CORES_PER_NODE:16<br>
current:MKL_TARGET_ARCH:intel64<br>
<br>
------------------<br>
<a href="http://ftiudm.ru/content/view/25/103/lang,english/" rel="noreferrer" target="_blank">http://ftiudm.ru/content/view/25/103/lang,english/</a> <br>
Physics-Techn.Institute,<br>
Udmurt Federal Research Center, Ural Br. of Rus.Ac.Sci.<br>
426000 Izhevsk Kirov str. 132<br>
Russia<br>
---<br>
Tel. +7 (34I2)43-24-59 (office), +7 (9I2)OI9-795O (home)<br>
Skype: lyuka18 (office), lyuka17 (home)<br>
E-mail: <a href="mailto:lyuka17@mail.ru" target="_blank">lyuka17@mail.ru</a> (office), <a href="mailto:lyuka17@gmail.com" target="_blank">lyuka17@gmail.com</a> (home)<br>
_______________________________________________<br>
Wien mailing list<br>
<a href="mailto:Wien@zeus.theochem.tuwien.ac.at" target="_blank">Wien@zeus.theochem.tuwien.ac.at</a><br>
<a href="http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien" rel="noreferrer" target="_blank">http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien</a> <br>
SEARCH the MAILING-LIST at: <a href="http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html" rel="noreferrer" target="_blank">http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html</a> <br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr">Professor Laurence Marks<br>Department of Materials Science and Engineering<br>Northwestern University<br><a href="http://www.numis.northwestern.edu/" target="_blank">www.numis.northwestern.edu</a><div>Corrosion in 4D: <a href="http://www.numis.northwestern.edu/MURI" target="_blank">www.numis.northwestern.edu/MURI</a><br>Co-Editor, Acta Cryst A<br>"Research is to see what everybody else has seen, and to think what nobody else has thought"<br>Albert Szent-Györgyi</div></div></div>