<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div> </div><div> Thank you very much, Prof. Lyudmila.</div><div>Please see my updated, reduced query below.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
I do not use mpi, only simple parallelization over k-points, so I will answer only some of your questions.<br>
> (1) is it ok with mpiifort or mpicc or it should have mpifort or mpicc??<br>
<br>
I do not know and I even do not understand the question.<br></blockquote><div><br></div><div>I compiled WIEN2k_16 with mpiifort and mpiicc, so my question is whether mpiifort and mpiicc are correct or whether I should use mpifort and mpicc instead (note the double "i").</div><div>I hope this question is now better framed. </div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
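</blockquote><div><br></div><div>For what it is worth, one way to answer this yourself: Intel MPI ships both the mpiifort/mpiicc wrappers (Intel compilers underneath) and mpifort/mpicc (GNU compilers by default), and these wrappers usually accept a "-show" flag that prints the underlying compile command without compiling anything. A minimal check, assuming such wrappers are on your PATH (this is a generic sketch, not part of WIEN2k itself):</div><div><br></div>

```shell
# Print which underlying compiler each MPI wrapper would invoke.
# Assumption: Intel-MPI/MPICH-style wrappers that support the -show flag.
for w in mpiifort mpiicc mpifort mpicc; do
  if command -v "$w" >/dev/null 2>&1; then
    printf '%s -> ' "$w"
    "$w" -show 2>/dev/null || echo '(no -show support)'
  else
    echo "$w: not found on this machine"
  fi
done
```

<div>If "-show" reports the Intel compilers (ifort/icc) for the double-"i" wrappers, then compiling with mpiifort/mpiicc simply means you built against Intel MPI with the Intel compilers, which is a common choice.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">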
<br>
> (2) how to know that job is running with mpi parallelization?<br>
<br>
IMHO, the simplest way is from the dayfile:<br></blockquote><div><br></div><div>It is a good idea to check this in case.dayfile. </div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
cycle 1 (Wed Sept 21 21:59:09 SAMT 2016) (60/99 to go)<br>
> lapw0 -p (21:59:09) starting parallel lapw0 at Wed Sept 21 21:59:09 SAMT 2016<br>
-------- .machine0 : processors<br>
running lapw0 in single mode <--- *** this is NOT mpi<br>
10.221u 0.064s 0:10.35 99.3% 0+0k 0+28016io 0pf+0w<br>
> lapw1 -up -p -c (21:59:19) starting parallel lapw1 at Wed Sept 21 21:59:19 SAMT 2016<br>
-> starting parallel LAPW1 jobs at Wed Sept 21 21:59:19 SAMT 2016<br>
running LAPW1 in parallel mode (using .machines) <--- *** this is k-point parallel<br>
9 number_of_parallel_jobs <--- *** this is k-point parallel<br>
localhost(12) 131.805u 1.038s 2:13.24 99.6% 0+0k 0+94072io 0pf+0w<br>
...<br>
localhost(12) 122.034u 1.234s 2:03.67 99.6% 0+0k 0+81472io 0pf+0w<br>
Summary of lapw1para: <--- *** this is k-point parallel<br></blockquote><div><br></div><div><br></div><div>Thank you very much for the detailed answer.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
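</blockquote><div><br></div><div>To make this check quick on a long dayfile, the telltale lines can be pulled out with grep. The sketch below fabricates a three-line dayfile excerpt purely for illustration; on a real calculation you would run the grep against your own case.dayfile:</div><div><br></div>

```shell
# Fabricate a tiny dayfile excerpt like the one quoted above (illustration only).
cat > case.dayfile <<'EOF'
running lapw0 in single mode
running LAPW1 in parallel mode (using .machines)
9 number_of_parallel_jobs
EOF

# "single mode" means no mpi for that step; "parallel mode (using .machines)"
# together with a number_of_parallel_jobs line indicates k-point parallelization.
grep -E 'running .* mode|number_of_parallel_jobs' case.dayfile
```

<div>If grep only ever reports "single mode" lines, no parallelization was used at all.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">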
<br>
> the *.err file seems as:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
cp: cannot stat `CuGaO2.scfdmup': No such file or directory >>><br></blockquote>
I don't know, and I am afraid nobody knows without info<br></blockquote><div><br></div><div>This is not a problem; it is the default behaviour for the runsp_c_lapw case, set by Prof. Peter to save computational time. I found this in a three-year-old answer from Prof. Peter.</div><div> </div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Mon. Sept 19 15:10:29 SAMT 2016> (x) lapw1 -up -p -c<br>
Mon. Sept 19 15:12:52 SAMT 2016> (x) lapw1 -dn -p -c<br>
Mon. Sept 19 15:15:09 SAMT 2016> (x) lapw2 -up -p -c ...</blockquote><div> </div><div>Okay, that is because you are running the run_lapw -c case. </div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> (3) I want to know how to change the variables below in the job file so<br>
that I can run mpi jobs more effectively<br>
# the following number / 4 = number of nodes<br>
#$ -pe mpich 32<br>
set mpijob=1 ??<br>
set jobs_per_node=4 ??<br>
#### the definition above requests 32 cores and we have 4 cores /node.<br>
#### We request only k-point parallel, thus mpijob=1<br>
#### the resulting machines names are in $TMPDIR/machines<br>
setenv OMP_NUM_THREADS 1 ???????<br>
</blockquote>
<br>
I don't know.<br></blockquote><div><br></div><div><br></div><div>Okay, maybe someone else can look into this.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
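</blockquote><div><br></div><div>Regarding (3), a hedged sketch of what those variables usually control in the WIEN2k job templates: mpijob is how many MPI processes each parallel job gets (mpijob=1 means pure k-point parallelization), jobs_per_node times the node count should match the requested slots, and "setenv OMP_NUM_THREADS 1" keeps the k-point jobs from oversubscribing cores with extra threads. The script then turns the scheduler's host list into a .machines file, roughly like this (the hostnames and file names here are invented for illustration; in a real job the list would come from something like $TMPDIR/machines, and the authoritative format is in the WIEN2k user's guide):</div><div><br></div>

```shell
# Illustrative only: build a .machines file for pure k-point parallelization
# (mpijob=1) from a scheduler host list. Here we fake a 4-slot allocation.
printf 'node1\nnode1\nnode2\nnode2\n' > hostlist

# One "1:host" line per slot => one k-point job per core, no MPI inside a job.
rm -f .machines
while read -r host; do
  echo "1:$host" >> .machines
done < hostlist
echo 'granularity:1' >> .machines
echo 'extrafine:1'   >> .machines

cat .machines
```

<div>For a real MPI run (mpijob>1), the .machines lines would instead assign several cores to each job; since your setup works with "set mpijob=1", the pure k-point form above is the relevant one.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">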
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
(4) The job with 32 cores and the job with 64 cores (with "set mpijob=2") take ~equal time for the scf cycles.<br>
</blockquote>
<br>
From your log file it looks like you do not have any parallelization, so in both cases you get equal times.<br></blockquote><div><br></div><div>Yes, that may be. But if I use "set mpijob=1", then it runs well with k-point parallelization.</div><div><br></div><div><br></div><div><br></div><div>Thank you very much</div><div><br></div><div><br></div><div>Sincerely</div><div><br></div><div>Bhamu</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
</blockquote></div><br></div></div>