[Wien] scratch folder in k-point parallel calculation
Oleg Rubel
orubel at lakeheadu.ca
Fri Jun 28 04:46:38 CEST 2013
If you are talking about a mixed (MPI + k-parallel) job, there are
suggestions in the FAQ: http://www.wien2k.at/reg_user/faq/pbs.html
I use the SGE scheduler and the following parallel_options file:
setenv USE_REMOTE 0        # launch k-parallel jobs locally, without ssh to the compute nodes
setenv MPI_REMOTE 0        # start mpirun from the local node; it places the remote processes itself
setenv WIEN_GRANULARITY 1  # granularity of the k-point distribution (1 = one chunk per machine entry)
setenv WIEN_MPIRUN "mpiexec -machinefile _HOSTS_ -n _NP_ _EXEC_"  # _HOSTS_, _NP_, _EXEC_ are substituted by WIEN2k
With this setup, it works with the node-local SCRATCH directory. I can share a
submission script for a mixed MPI + k-parallel job, if needed.
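As a rough sketch, such a script could look like the one below (the parallel
environment name, core counts, and scratch path are site-specific assumptions,
not my actual script):

#!/bin/csh -f
#$ -N wien2k_mixed
#$ -pe mpi 16                      # parallel environment and slot count are site-specific
#$ -cwd

# keep the vector files on the node-local scratch disk
setenv SCRATCH /scratch

# one k-parallel job per allocated node, each running on 4 MPI cores;
# $PE_HOSTFILE lists one "hostname slots queue processors" entry per node
awk '{print "1:" $1 ":4"}' $PE_HOSTFILE > .machines
echo "granularity:1" >> .machines
echo "extrafine:1" >> .machines
# (an additional "lapw0:host:n ..." line can be added to run lapw0 under MPI)

run_lapw -p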
Oleg
On 13-06-27 4:46 PM, Yundi Quan wrote:
> I forgot to add that after lapw2, all the output files should be written
> to a shared folder (not to the scratch folder of each node) so that sumpara
> -p can access all of them. In the version I'm using (11.0), it seems that
> lapw1 is not executed locally.
>
>
> Yundi
>
> On Thu, Jun 27, 2013 at 1:37 PM, Yundi Quan <yquan at ucdavis.edu> wrote:
>
>
>
> ---------- Forwarded message ----------
> From: Stefaan Cottenier <Stefaan.Cottenier at ugent.be>
> Date: Thu, Jun 27, 2013 at 1:33 PM
> Subject: Re: [Wien] scratch folder in k-point parallel calculation
> To: A Mailing list for WIEN2k users <wien at zeus.theochem.tuwien.ac.at>
>
>
>
> I'm working on a cluster with many nodes. Each node has its own /scratch
> folder, which cannot be accessed by the other nodes. My own data folder is
> accessible to all nodes. When the WIEN2k scratch folder is set to './',
> everything works fine, except that all reads and writes go to my own data
> folder, which slows down the system. When the WIEN2k scratch folder is set
> to '/scratch', the scratch files such as case.vector_ created on one node
> are not visible to the other nodes. This would not be a problem for
> lapw1 -p and lapw2 -p if each node stuck to certain k-points and looked
> for the scratch files related to those k-points. Is there a way of
> configuring WIEN2k to work in this manner?
>
>
> The k-point parallelization scheme works exactly in this way (MPI
> parallelization is different).
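>
> For illustration, a minimal .machines file for a k-parallel run on two
> nodes (the host names are placeholders) would be:
>
> 1:node01
> 1:node02
> granularity:1
> extrafine:1
>
> With SCRATCH set to /scratch, each lapw1 -p job writes its case.vector_
> file on the node that computed it, and lapw2 -p sends the matching piece
> back to the same node, so the vectors never have to leave that node.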
>
> Stefaan
>
>
> _______________________________________________
> Wien mailing list
> Wien at zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at: http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>
More information about the Wien mailing list