[Wien] Parallelize Spin Up and Down

L. D. Marks L-marks at northwestern.edu
Thu Sep 29 14:19:38 CEST 2005


Depending upon how many nodes you have available, there are cases (e.g.
big cells where only a few k-points are needed) where editing the scripts
to read separate machine files, e.g. .machines_up and .machines_dn, would
in fact double the speed.
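
As an illustration of the idea (untested; the file names .machines_up and
.machines_dn are hypothetical, since the stock scripts read only .machines
and you would have to edit the parallel scripts yourself as described
above), the two per-spin machine files could simply hold the usual
"speed:hostname" lines, one node per spin:

   .machines_up   (used by the spin-up lapw1/lapw2):
      1:node1

   .machines_dn   (used by the spin-down lapw1/lapw2):
      1:node2

With something like this in place, each spin runs its part of the job on
its own node instead of both spins queuing on the same machine list.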

On Thu, 29 Sep 2005, Peter Blaha wrote:

> If your scf cycle is sped up by a factor of 2, such a parallelization
> is of course useful. (Do you really get a speedup of 2? Usually two parallel
> jobs "interfere", i.e. they run slower even though each gets "100%". Check the
> start and stop times for both cases. Maybe your IBM p630 has a fast enough
> memory bus.)
>
> However, it helps ONLY
>
> - on a shared-memory machine with ENOUGH memory to run 2 lapw1 jobs simultaneously
> - in a spin-polarized case with ONLY 1 k-point
>
> I'm not aware of many cases where these constraints hold (maybe for
> single-atom supercells).
>
>>  I didn't get a chance to test running spin-up and -down for almost a month,
>> because I have been switching among different machines. Finally, I can settle
>> down on an IBM p630 with 4 processors. I modified runsp_lapw and ran it. It
>> runs without problems, and the up and dn tasks don't interfere with each
>> other's speed (i.e., 100% parallel efficiency). I wonder if this means that
>> parallelizing up and dn is useful.
>>  Chiung-Yuan
>>  On 8/18/05, Peter Blaha <pblaha at theochem.tuwien.ac.at> wrote:
>>>
>>> It is not implemented, since I'm not convinced that it is a useful
>>> option.
>>>
>>> a) Usually even big magnetic systems require more than 1 k-point (maybe
>>> not?)
>>> b) On most dual-processor machines I know of, you will not gain as much as
>>> expected, because the memory bus is slow and the run time will increase if
>>> two lapw1 jobs are running simultaneously.
>>> Instead, I tend to use a parallel goto-lib (Goto BLAS), which gives a very
>>> nice speedup (at least for dual Xeons).
>>>
>>> Anyway, for a shared-memory machine I think it is trivial to
>>> implement. Test the following:
>>>
>>> In runsp_lapw, change the lapw1:, lapw1c:, lapw2: and lapw2c: sections:
>>> add a background "&" character to the commands and a "wait" line after them.
>>>
>>> lapw1:
>>> ....
>>>   if ( "$so" == "-so" ) then
>>>     total_exec lapw1 $it0 -up $para $nohns &          <== change: add "&"
>>>   else
>>>     total_exec lapw1 $it0 -up $para $nohns $orb &     <== change: add "&"
>>>   endif
>>>   if ( $icycle >= $in1new ) then
>>>     write_in1_lapw -dn -ql $qlimit >> $dayfile
>>>     if ( $status == 0 ) cp $file.in1new $file.in1
>>>   endif
>>>   if ( "$so" == "-so" ) then
>>>     total_exec lapw1 $it0 -dn $para $nohns &          <== change: add "&"
>>>   else
>>>     total_exec lapw1 $it0 -dn $para $nohns $orb &     <== change: add "&"
>>>   endif
>>>   wait                                                <== add
>>> ...
>>>
>>> lapw2:
>>>   if ( "$so" == "-so" ) goto lapw2c
>>>   testinput $file.in2 error_input
>>>   total_exec lapw2 -up $para &                        <== change: add "&"
>>>   total_exec lapw2 -dn $para &                        <== change: add "&"
>>>   wait                                                <== add
>>>
>>> And similarly in the lapw1c and lapw2c sections.
>>>
>>> In addition I'd expect that you need to comment out the testerror statements
>>> of the total_exec aliases.
>>> I think that should do it. Let me know if you are successful and whether it
>>> is useful.
>>>
>>>> I am running a spin-polarized job with only one k-point on a
>>>> 2-processor machine. If I don't want to do fine-grained parallelization,
>>>> can I distribute the spin-up and spin-down tasks onto the 2 processors
>>>> so that they run simultaneously (for lapw1 and lapw2)?
>>>>
>>>> Thanks for your attention,
>>>> Chiung-Yuan
>>
>
>
>                                      P.Blaha
> --------------------------------------------------------------------------
> Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
> Phone: +43-1-58801-15671             FAX: +43-1-58801-15698
> Email: blaha at theochem.tuwien.ac.at    WWW: http://info.tuwien.ac.at/theochem/
> --------------------------------------------------------------------------
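
To make the quoted "and similarly in the lapw1c and lapw2c sections"
concrete: below is a minimal, untested sketch of the lapw2c change, by
direct analogy with the lapw2 snippet quoted above. The exact command
lines (in particular the complex-case "-c" switch and the in2c file name)
may differ in your version of runsp_lapw, so treat it as a pattern to
reproduce rather than a drop-in patch; lapw1c would get the same "&"/"wait"
treatment.

   lapw2c:
     testinput $file.in2c error_input
     total_exec lapw2 -c -up $para &     <== change: add "&" (command line assumed, check your script)
     total_exec lapw2 -c -dn $para &     <== change: add "&"
     wait                                <== add

As Peter notes, the testerror check inside the total_exec alias then has
to be commented out, presumably because the error files cannot be checked
until after the "wait".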

Note: if you have an old email address for me, please note that "nwu" has
been changed to "northwestern".
-----------------------------------------------
Laurence Marks
Department of Materials Science and Engineering
MSE Rm 2036 Cook Hall
2220 N Campus Drive
Northwestern University
Evanston, IL 60201, USA
Tel: (847) 491-3996 Fax: (847) 491-7820
email: L-marks at northwestern dot edu
http://www.numis.northwestern.edu
-----------------------------------------------


