[Wien] Parallelization and PBS on a single computer
Peter Blaha
pblaha at theochem.tuwien.ac.at
Thu Jun 29 14:16:53 CEST 2017
I hope you have read the chapter about parallelization in the UG?
Then you should know what the 3 cases actually do.
A few remarks:
Case 2: This is k-point parallelization, and you are running just one
k-point in each lapw1 job. The time for one k-point is very short (for
standard TiC it should be below 0.1 sec per k-point). In this mode you
have to spawn 20 jobs (which are even delayed by DELAY seconds in
lapw1para_lapw), and this takes MUCH more time than the actual run time
of 20 k-points on a single core.
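If you want to test k-point parallelization on this machine at all, use
only a few jobs. A .machines file like the following (just a sketch;
adapt the number of "1:localhost" lines to what your tests show is
optimal) distributes the k-points over 4 lapw1/lapw2 jobs:

# k-point parallelization: 4 jobs on the local machine
1:localhost
1:localhost
1:localhost
1:localhost
granularity:1
extrafine:1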
In essence: you cannot speed up a calculation with parallelization to
an arbitrary level; you have to "think", or if necessary test each case
individually, until you get a feeling for what the optimal number of
cores is for your present input. If the single-core time is "nearly
zero", parallelization will not be faster; in fact it will be SLOWER
due to the parallelization overhead, and this is what you observe.
PS: In addition, even if one k-point on one core took 10 seconds, when
you run 20 such jobs in parallel each single job will be MUCH slower.
These Intel multicore CPUs are "memory-bus limited", i.e. Intel sells
you expensive CPUs with 24 cores, but the memory bus can feed only far
fewer cores efficiently. For most memory-bound applications that many
cores are useless, and everything slows down when you try to use all of
them.
Case 3: This is MPI fine-grained parallelization. Basically the same
thing happens here: splitting up such a small matrix over 20 cores is
very inefficient and will run slower than a non-parallel run. The UG
mentions explicitly that you should use MPI parallelization only for
cells with more than 50 atoms.
Your tests will look VERY different when you use a "big" case with a
larger unit cell.
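For such a big case an MPI .machines file would look like this (a
sketch only; it assumes WIEN2k was compiled with MPI/ScaLAPACK and that
parallel_options is set up, see the UG for the exact syntax):

# MPI fine-grained parallelization on one 20-core node (large cells only)
lapw0:localhost:20
1:localhost:20
granularity:1
extrafine:1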
Strategy:
Use larger cases for parallel tests.
Always monitor your tests with the "top" command, so that you can see
what happens.
Try "export OMP_NUM_THREADS=2" (or 4 or 8) and check the timings. This
uses 2 (or 4, 8) cores in all BLAS calls (a large fraction of lapw1);
see the sketch below.
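A minimal sketch of such a timing test (bash; the flags are taken from
your own run_lapw call, reduced to one iteration just for timing, and
this only helps if WIEN2k is linked against a threaded BLAS such as MKL):

# compare one SCF iteration with different thread counts
export OMP_NUM_THREADS=2
time run_lapw -i 1 -I      # 2 threads in every BLAS/LAPACK call
export OMP_NUM_THREADS=4
time run_lapw -i 1 -I      # 4 threads; compare the wall-clock times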
I don't know why your mpi-job crashes in lapw2. There must be more info ...
PBS error: obviously your PBS does not transfer the "environment".
When you type run_lapw interactively, the system finds this command
because it is in your PATH, which was defined in your .bashrc file. The
PBS job does not inherit that environment. You can probably fix this by
including "source ~/.bashrc" (or otherwise setting the PATH) in the
script.
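A minimal sketch of the serial script with the environment set
explicitly (the install directory is taken from your dayfile; WIENROOT
and SCRATCH are the usual WIEN2k variables, adjust the values to your
installation):

#!/bin/tcsh
#PBS -q batch
#PBS -l nodes=1:ppn=1
#PBS -l walltime=1:00:00
#PBS -o wien2k_output
#PBS -j oe
#PBS -N wien2k_test
# make the WIEN2k commands visible inside the batch job
setenv WIENROOT /home/milkbar/WIEN2k_13
setenv SCRATCH ./
setenv PATH ${WIENROOT}:${PATH}
cd $PBS_O_WORKDIR
run_lapw -i 40 -ec .0001 -I

Alternatively, since your PATH is defined in ~/.bashrc but the script
runs under tcsh, you could write the script for bash instead
(#!/bin/bash) and simply "source ~/.bashrc" in place of the setenv lines.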
On 06/29/2017 07:49 AM, Yoji Kobayashi wrote:
> Dear Users,
>
> I have some questions/problems regarding parallelization and PBS.
> I’m not sure if I’m really running parallel vs. serial, and my PBS
> script isn’t working.
>
> ===
> My system info:
> Intel Xeon CPU E5-2630 v2 @2.6 GHz, 24 CPUS
> Memory: 32GB
> Running Wien2k_13, on Ubuntu 14.04.03
> File system: ext4
> (This is considered a single node with 24 processors?)
> ===
> My first question is, am I really running a parallel calculation in a
> meaningful way?
>
> What I try:
> In w2web, a serial calculation (SCF only) for the TiC example (500 k
> points) takes about 25 sec. to converge.
> I do the same calculation (starting with a new case) but setting
> parallelization in w2web, with slightly different .machine files for
> each case:
>
> Case 1:
> 1:localhost
>
> Case 2 (i.e. 20 lines like the following):
> 1:localhost
> 1:localhost
> …
> 1:localhost
> 1:localhost
>
> Case 3
> 1:localhost:20
>
> (no lines referring to granularity, etc for now)
>
> What I get:
> Case 1 computes in about 54 sec;
> Case 2 computes in 1min23 sec.;
> Case 3 gives an error in running lapw2, see the dayfile below:
> -----
> Calculating YK-016-TiC in /home/milkbar/Yoji/YK-016-TiC
>
> on milkbar-computer with PID 18077
> using WIEN2k_13.1 (Release 17/6/2013) in /home/milkbar/WIEN2k_13
>
>
> start (Thu, 29 Jun 2017, 14:23:39 JST) with lapw0 (40/99 to go)
>
> cycle 1 (Thu, 29 Jun 2017, 14:23:39 JST) (40/99 to go)
>
>> lapw0 -p (14:23:39) starting parallel lapw0 at Thu, 29 Jun 2017, 14:23:39 JST
> -------- .machine0 : processors
> running lapw0 in single mode
> 1.7u 0.0s 0:01.84 98.3% 0+0k 16+440io 0pf+0w
>> lapw1 -p (14:23:41) starting parallel lapw1 at Thu, 29 Jun 2017, 14:23:41 JST
> -> starting parallel LAPW1 jobs at Thu, 29 Jun 2017, 14:23:41 JST
> running LAPW1 in parallel mode (using .machines)
> 1 number_of_parallel_jobs
> localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost localhost(20) 20 total processes failed to start
> 0.0u 0.0s 0:00.20 10.0% 0+0k 8080+8io 23pf+0w
> Summary of lapw1para:
> localhost k=0 user=0 wallclock=0
> 0.0u 0.0s 0:02.10 0.9% 0+0k 8208+216io 24pf+0w
>> lapw2 -p (14:23:43) running LAPW2 in parallel mode
> ** LAPW2 crashed!
> 0.0u 0.0s 0:00.07 28.5% 0+0k 32+104io 0pf+0w
> error: command /home/milkbar/WIEN2k_13/lapw2para lapw2.def failed
>
>> stop error
>
> ------
>
> Is my “serial” calculation actually processed over 24 CPUs already, so this is why it is faster than Case 2? Or am I doing something wrong? Why does Case 3 crash?
>
>
> ====
> My second question is about PBS.
> I installed torque PBS, and created a queue:
>
> # create default queue
> qmgr -c 'create queue batch'
> qmgr -c 'set queue batch queue_type = execution'
> qmgr -c 'set queue batch started = true'
> qmgr -c 'set queue batch enabled = true'
> qmgr -c 'set queue batch resources_default.walltime = 1:00:00'
> qmgr -c 'set queue batch resources_default.nodes = 1'
> qmgr -c 'set server default_queue = batch'
>
> and followed other instructions on
> https://jabriffa.wordpress.com/2015/02/11/installing-torquepbs-job-scheduler-on-ubuntu-14-04-lts/
>
> The PBS system seems to work since I can submit very simple scripts and
> see them on qstat. My problem is that when I try to submit a serial
> wien2k job via PBS, it gives me an error (ultimately of course I’d like
> to submit them as parallel, but because of the ambiguity above I’ve kept
> it to serial). Here's the PBS script and error message:
>
> #!/bin/tcsh
> ##PBS -A your_allocation
> # specify the allocation. Change it to your allocation
> #PBS -q batch
> #PBS -l nodes=1:ppn=20
> #PBS -l walltime=1:00:00
> #PBS -o wien2k_output
> #PBS -j oe
> #PBS -N wien2k_test
> cd $PBS_O_WORKDIR
> echo hello
> run_lapw -i 40 -ec .0001 -I
>
> Error message (contents of wien2k_output):
> hello
> /var/spool/torque/mom_priv/jobs/44.milkbar-computer.kage.SC: line 12:
> run_lapw: command not found
>
> The job is listed as complete in qstat, and the “hello” is written into
> the wien2k_output file. Changing the cd $PBS_O_WORKDIR to the path for
> the current case hasn’t changed anything. I can run run_lapw from the
> command line fine, though. Also, what do I write for allocation? (I
> commented it out, as I see other PBS scripts don’t always have this.)
>
> I’ve also tried the parallel case, with the following PBS script. I set
> up the .structure file and do the initialization with w2web. I leave the
> “parallel calculation” option unchecked when setting up the case file in
> w2web.
>
> #!/bin/tcsh
> ##PBS -A your_allocation
> #PBS -q batch
> #PBS -l nodes=1:ppn=20
> #PBS -l walltime=1:00:00
> #
> #PBS -o wien2k_output
> #PBS -j oe
> #PBS -N wien2k_test
> cd $PBS_O_WORKDIR
> #
> #cat $PBS_NODEFILE |cut -c1-6 >.machines_currentdd
> #set aa=`wc .machines_current`
> #echo '#' > .machines
> #
> ##example for k-point parallel lapw1/2
> set i=1
> while ($i <= $aa[1] )
> echo -n '1:' >>.machines
> head -$i .machines_current |tail -1 >> .machines
> @ i ++
> end
> echo 'granularity:1' >>.machines
> echo 'extrafine:1' >>.machines
> #
> #define here your Wien2k command
> run_lapw -p -i 40 -ec .0001 -I
>
> When I submit this job via qsub, again the job is immediately listed as
> complete in qstat, and I get the following error message in wien2k_output:
>
> milkbar at milkbar-computer:~/Yoji/YK-017-TiC$ cat wien2k_output
> /var/spool/torque/mom_priv/jobs/45.milkbar-computer.kage.SC: line 28:
> syntax error: unexpected end of file
>
> No .machines file has been created in the case folder.
> How can I successfully submit serial/parallel PBS jobs? Thanks in
> advance for your help.
>
> Yoji Kobayashi
>
> ==========================================================
> Yoji Kobayashi, Junior Assoc. Prof. yojik at scl.kyoto-u.ac.jp
> <mailto:yojik at scl.kyoto-u.ac.jp>
> http://www.scl.kyoto-u.ac.jp/~yojik/index.htm
>
> Kageyama Group, Dept. of Energy and Hydrocarbon Chemistry
> Graduate School of Engineering, Kyoto University
> Nishikyo-ku, Kyoto 615-8510, Japan
>
> Tel.: +81-75-383-2509 Fax: +81-75-383-2510
> http://www.ehcc.kyoto-u.ac.jp/eh10/kageyama.html
> ==========================================================
>
>
>
--
P.Blaha
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300 FAX: +43-1-58801-165982
Email: blaha at theochem.tuwien.ac.at WIEN2k: http://www.wien2k.at
WWW: http://www.imc.tuwien.ac.at/TC_Blaha
--------------------------------------------------------------------------