[Wien] System configuration
Pavel Ondračka
pavel.ondracka at email.cz
Thu May 23 07:51:53 CEST 2019
Hi Indranil,
While k-point parallelization is usually the most efficient approach
(provided you have a sufficient number of k-points) and does not need
any extra libraries, for a 100-atom case it might be problematic to fit
12 processes into 32 GB of memory. I assume you are already using it,
since you say you are running on two cores?
Instead, check the maximum memory requirement of lapw1 when run in
serial and, based on that, find how many processes you can run in
parallel; then for each of them place one line "1:localhost" into the
.machines file (there is no need to copy .machines from the templates
or to use random scripts; instead, read the userguide to understand
what you are doing, it will save you time in the long run). If you can
run at least a few k-points in parallel, it might already speed things
up significantly.
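As a minimal sketch (assuming, purely for illustration, that watching a
serial lapw1 run in top shows that four copies fit into your 32 GB), a
hand-written .machines file for k-point parallelization could look like
this:

# .machines (illustrative): four k-point parallel lapw1 jobs on the local machine
1:localhost
1:localhost
1:localhost
1:localhost
granularity:1
extrafine:1

The granularity and extrafine lines follow the SRC_templates example;
the number of 1:localhost lines is whatever your memory check allows.
With this file in the case directory, run_lapw -p (or runsp_lapw -p)
distributes the k-points over those processes.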
For MPI you would need the openmpi-devel, scalapack-devel and
fftw3-devel packages (I'm not sure how exactly they are named on
Ubuntu). Especially the scalapack configuration can be tricky, so it is
probably easiest to start with lapw0, as it needs only MPI and fftw.
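On Ubuntu these are most likely the lib*-dev variants; the exact names
below are my guess and worth verifying with apt search before
installing:

# assumed Ubuntu package names for the MPI, ScaLAPACK and FFTW development files
sudo apt-get install libopenmpi-dev libscalapack-openmpi-dev libfftw3-dev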
Also, based on my experience with default gfortran settings, it is
likely that not even your single-core performance is optimized. Try
downloading the serial benchmark
http://susi.theochem.tuwien.ac.at/reg_user/benchmark/test_case.tar.gz
untar it, run x lapw1 and report the timings (on an average i7 CPU it
should take below 30 seconds; if it takes significantly more, you will
need some more tweaks).
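For reference, the whole check is only a few commands (assuming the
archive unpacks into a test_case directory; adjust accordingly if it
extracts the files directly into the current directory):

wget http://susi.theochem.tuwien.ac.at/reg_user/benchmark/test_case.tar.gz
tar xzf test_case.tar.gz
cd test_case
x lapw1    # the timing summary is printed when the run finishes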
Best regards
Pavel
On Thu, 2019-05-23 at 10:42 +0530, Dr. K. C. Bhamu wrote:
> Hi,
>
> If you are doing a k-point parallel calculation (with more than 12
> k-points in the IBZ), then use the script below in the terminal where
> you want to run the calculation, or put it in your job script together
> with the -p option of run(sp)_lapw (-so).
>
> If anyone knows how to repeat an nth line m times in a file, this
> script could be changed accordingly.
>
> The script below simply copies the .machines file from the templates
> directory and updates it to your needs, so you do not have to copy it,
> open it in your favorite editor and edit it manually.
>
> # copy the template .machines and rebuild it with repeated localhost lines
> cp $WIENROOT/SRC_templates/.machines .
> grep localhost .machines | perl -ne 'print $_ x 6' > LOCALHOST.dat
> tail -n 2 .machines > grang.dat
> # drop lines 22-25 of the template, then reassemble the file
> sed '22,25d' .machines > MACHINE.dat
> cat MACHINE.dat LOCALHOST.dat grang.dat > .machines
> rm LOCALHOST.dat MACHINE.dat grang.dat
>
> regards
> Bhamu
>
>
> On Wed, May 22, 2019 at 10:52 PM Indranil mal <indranil.mal at gmail.com> wrote:
> > Respected Sir / Users,
> >                I am using a PC with an Intel i7 8th gen CPU (with 12
> > cores), 32 GB RAM and a 2 TB HDD, running Ubuntu 18.04 LTS. I have
> > installed OpenBLAS-0.2.20 and am using the GNU Fortran and C
> > compilers. I am trying to run a system with 100 atoms; only two
> > cores are being used, the rest are idle, and the calculation is
> > taking too long. I have not installed MPI, ScaLAPACK or ELPA. Please
> > help me: what should I do to utilize all of the cores of my CPU?
> >
> >
> >
> > Thanking you
> >
> > Indranil
>
> _______________________________________________
> Wien mailing list
> Wien at zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at:
> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html