[Wien] wien and hyperthreading
Peter Blaha
pblaha at zeus.theochem.tuwien.ac.at
Tue Mar 15 19:11:15 CET 2005
Just change the FOR switch to TOT and lapw2 will run in 10-20 minutes, saving
you more than an hour per iteration.
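The switch is the first word on the first line of case.in2 (or case.in2c in
your case); a minimal sketch with illustrative values only, not taken from
this calculation:

   TOT          (TOT,FOR,QTL,EFG)     first line: use TOT during the scf cycles
   ...
   12.00        GMAX                  last line: plane-wave cutoff of the density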
If you still need more savings, you may try a 381-point radial mesh. Most
likely I would not touch GMAX. Check case.output0 for the timing: where is
the time going? This tells you whether it is spent inside the spheres (radial
mesh) or in the Fourier series.
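For the radial mesh: NPT is set per atom in case.struct, on the same line as
R0 and RMT. A hedged example (atom label and values are illustrative):

   Ge1        NPT=  781  R0=0.00010000 RMT=   2.1000   Z: 32.0

Reducing NPT there (e.g. to 381) shrinks the radial grids lapw0 integrates
over; the per-part timings then show up in case.output0.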
> Yes, everything is running the latest version. I have been running NPT=781 on
> the system (132 Ge atoms plus 2 As and 4 Li atoms). lapw1 is running over 6
> computers. GMAX is at its default. case.in2c is set to FOR (I'm converging on
> forces, so maybe I should change this). The calculations are for determining
> core levels. I was trying to get mpi working with the new Intel mpi, mkl7.2
> cluster, and ifc 7. So would a better procedure be to run with 381, TOT,
> and charge convergence, reach convergence, then change to a fine radial mesh
> and switch to force convergence for the last bit? What GMAX would you
> recommend for the "rough" calculations?
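(For the two-stage procedure described above, a minimal sketch of the driver
calls, assuming the standard run_lapw convergence flags; check run_lapw -h for
your version, and the tolerances below are illustrative only:

   run_lapw -cc 0.001 -i 40    rough stage: 381-point mesh, TOT, charge convergence
                               then restore the fine mesh, set FOR in case.in2c, and
   run_lapw -fc 1.0 -i 40      final stage: force convergence in mRy/bohr
)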
>
> Thanks for your help.
>
> Mick
> ----- Original Message ----- From: "Peter Blaha"
> <pblaha at zeus.theochem.tuwien.ac.at>
> To: <wien at zeus.theochem.tuwien.ac.at>
> Sent: Tuesday, March 15, 2005 11:40 AM
> Subject: Re: [Wien] wien and hyperthreading
>
>
> > The timing is very unusual.
> > lapw1 should take longer than lapw0 (maybe you use k-point parallel lapw1,
> > which would explain it).
> > lapw1 should also take longer than lapw2! Are you using run_lapw -I?
> > That is, are you sure that your switch in case.in2 is TOT and not FOR? Only
> > with FOR might lapw2 take longer than lapw1, and FOR should be used only
> > in the last cycle.
> >
> > I do have a working lapw0_mpi, but most likely it is not helpful to you.
> > It was compiled with the pgi compiler and uses mpich (also built with the
> > pgi-4.0 compiler). But all of this depends on the mpi installation, which
> > was done by the computing center.
> >
> > lapw0 can be sped up by reducing the radial mesh (e.g. use only 381
> > points); GMAX (in case.in2) also determines the cpu time for this part. Of
> > course, with GGA you lose some accuracy, but for time-consuming
> > structural relaxations this should be ok. Use clminter to interpolate
> > to a crude radial mesh (and then back to a fine one).
> > (I hope you are using the latest lapw0 version? L. Marks has sped up
> > lapw0 by quite some amount.)
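A hedged sketch of the clminter round trip (the file names and the invocation
via the x script are assumptions here; see the clminter section of the
usersguide for the exact procedure):

   cp case.struct case.struct_fine     keep the original fine-mesh struct
   edit NPT in case.struct (e.g. 781 -> 381)
   x clminter                          interpolate case.clmsum onto the new mesh

The reverse step restores the fine mesh before the final force cycles.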
> >
> > > I was just getting desperate to squeeze more speed out of the system, as
> > > no one seems to have got a linux mpi version working (or at least no one
> > > has answered previous posts). The system I'm working on now takes only 28
> > > minutes in lapw1, but 40 minutes in lapw0 (this is why mpi would be nice)
> > > and 1h20 in lapw2.
> >
> >
P.Blaha
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-15671 FAX: +43-1-58801-15698
Email: blaha at theochem.tuwien.ac.at WWW: http://info.tuwien.ac.at/theochem/
--------------------------------------------------------------------------