[Wien] Pathscale+OpenMPI support

Peter Blaha pblaha at theochem.tuwien.ac.at
Wed Nov 19 09:11:58 CET 2008


Thank you very much for your report!

The description given on your wiki site is really outstanding, and I believe it
could be very helpful for others, even serve as a general template for system admins.
If you agree, I'd like to add this link to our "faq" pages.

Two small comments:
SRC_tetra: I've already noticed this problem, but I'd expect that one needs only
the modification in the first line (since it is a continuation within a string),
not in the second one, i.e.
                    & " for Atom",i4,"  col=",i3,"  Energy=",2f8.4,/)') &
                      IDOS(1,1),IDOS(1,2),emin,emax
should do the job. It will be included in the next release (I'll also add the pathscale support).
-------------

Parallel runs: Since I do not know what your $machines variable looks like, I cannot judge your
script in detail; however, I do not see any tools for the following task:

WIEN2k has two parallel modes, a "k-point" and a "fine-grain mpi" mode.
In many "real" applications one has more than ONE k-point in the case.klist file, and
then k-point parallelism (sometimes in addition to mpi parallelism) is very useful and
efficient.

This is managed by the WIEN2k .machines file in such a way that every single line corresponds
to one mpi job, and the number of such lines tells WIEN2k how many k-parallel mpi jobs
should be started.
If one requests 16 cores (#$ -pe mpi 16),

a .machines file like

1:node1:4 node2:4 node3:4 node4:4     (or 1:node1 node1 node1 node1 node2 node2 ....)

would run ONE mpi job on all 16 cores across the 4 nodes (doing one k-point after the other).

However, since mpi parallelization is often not perfectly efficient (in particular for smaller cases),
and one often has many k-points, a .machines file like

1:node1:4 node2:4
1:node3:4 node4:4

can be more efficient. It will run TWO mpi jobs on 8 cores each, working in parallel on the
k-point list in case.klist.
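For illustration, the fully k-point-parallel extreme with the same 16 cores would be a
.machines file like

1:node1:4
1:node2:4
1:node3:4
1:node4:4

i.e. FOUR mpi jobs with 4 cores each, which pays off whenever case.klist contains at least
four k-points.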
Thus you may want to modify the script which produces the .machines file slightly, by adding
one additional environment variable to your job script that allows a decomposition into
k-point and mpi parallelization; a minimal sketch of the idea is given below.
You can find a full example (for loadleveler or sge) at http://www.wien2k.at/reg_user/faq/pbs.html
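
As a rough sketch only (the FAQ scripts above are more complete): this assumes an sge job
script, where $PE_HOSTFILE lists "host slots queue range" per line, and uses CORES_PER_JOB
as a hypothetical environment variable you set yourself to control the decomposition.

#!/bin/bash
# Sketch: build a .machines file from sge's $PE_HOSTFILE.
# CORES_PER_JOB is an assumed variable (set it in the job script); the
# total core count is assumed to be divisible by it.
CORES_PER_JOB=${CORES_PER_JOB:-4}

# expand "host slots queue range" lines into one hostname per core
hosts=$(awk '{for (i = 0; i < $2; i++) print $1}' "$PE_HOSTFILE")

# one .machines line per group of CORES_PER_JOB cores: each line is one
# mpi job, and the number of lines sets the number of k-parallel jobs
echo "$hosts" | awk -v n="$CORES_PER_JOB" '
  { line = line " " $1 }
  NR % n == 0 { print "1:" substr(line, 2); line = "" }
' > .machines

# optional entries (check the WIEN2k user's guide for your setup):
# lapw0 as a single mpi job over all cores, plus load-balancing hints
printf 'lapw0:%s\n' "$(echo $hosts)" >> .machines
echo "granularity:1" >> .machines
echo "extrafine:1" >> .machines

With CORES_PER_JOB=16 this would reproduce the single-job file above, with CORES_PER_JOB=8
the two-job variant.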

Best regards

Scott Beardsley wrote:
> I have WIEN2k 8.3 working with the Pathscale 3.2 compiler and OpenMPI 
> 1.2.6. I'm new to WIEN so I think it is working at least. I did have to 
> make a few changes. First there appears to be a bug in SRC_tetra/tetra.f:
> 

-- 

                                       P.Blaha
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-15671             FAX: +43-1-58801-15698
Email: blaha at theochem.tuwien.ac.at    WWW: http://info.tuwien.ac.at/theochem/
--------------------------------------------------------------------------

