[Wien] question about how to continue calculation on a supercomputer with unix system

Stefaan Cottenier Stefaan.Cottenier at fys.kuleuven.be
Thu Dec 6 20:28:33 CET 2007


Other people might have smarter strategies, but one that works is to
add "-i 1 -NI" to your run_lapw command (= run only 1 iteration and
reuse the broyden files if there are any). In this way, every
iteration is technically independent of what came before, while all
information about the previous iterations is still read from the
files. This avoids having to let your calculation crash because it
exceeds the wall time limit.
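
For example, a minimal sketch (the -p switch and the convergence
criterion are only placeholders for whatever options you normally use):

   run_lapw -p -ec 0.0001 -i 1 -NI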

It requires that at the end of your pbs script you add the command to
submit this very same pbs script again. In this way, iteration after
iteration is executed, in an endless loop. You have to watch it (once
per day?) to decide whether convergence is reached, and stop it by
e.g. removing the pbs script. It will then "crash" in a clean way
after the current iteration.
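
A minimal sketch of such a self-resubmitting script (called loop.pbs
here; the queue, node count, walltime and run_lapw options are
placeholders you would adapt to your own case):

#!/bin/bash
#PBS -l nodes=8
#PBS -q devel
#PBS -j oe
#PBS -l walltime=12:00:00

cd $PBS_O_WORKDIR
# run exactly one scf iteration, reusing the files of the previous one
run_lapw -p -i 1 -NI
# resubmit this same script for the next iteration;
# remove or rename loop.pbs to stop the loop cleanly
qsub loop.pbs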

If you inspect the wall time used for the first iteration (by having
the dayfile copied within the pbs script, for instance), you can adapt
the requested wall time accordingly. This ensures you do not ask for
more walltime than you need, and on most systems that means your jobs
will spend less time in the queue.
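
For instance (assuming your WIEN2k case is called "case"), a line like

   cp case.dayfile case.dayfile_$PBS_JOBID

just before the qsub line keeps a per-iteration copy of the timings,
so you can check how long one iteration really takes.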

Memory usage does not depend on the number of k-points, only on the
matrix size (= basis set size, number of atoms, ...). If you need to
reduce the memory usage per process, you have to use the MPI
(fine-grained) parallel version.
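
As a rough sketch only (node name and core count are placeholders,
and the exact .machines syntax for your installation is described in
the WIEN2k user's guide): MPI is switched on by assigning several
cores to one line of the .machines file, e.g.

# run lapw0 and the lapw1/lapw2 group as one MPI job over 8 cores of node01
lapw0:node01:8
1:node01:8
granularity:1
extrafine:1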

Stefaan



Quoting Dong Su <Dong.Su at asu.edu>:

> Dear Peter and Wien2k community:
> I tried to run wien2k on a supercomputer at our institute.
> (Information about the machine can be found at:
> http://hpc.asu.edu/hpcusersupport.php#running ) I installed it on
> the headnode and it was working fine.
> For parallel calculations I have to use a job script like the following:
>
> #!/bin/bash
> #PBS -l nodes=8
> #PBS -q devel
> #PBS -j oe
> #PBS -o Example.output
> #PBS -l walltime=12:00:00
>
> cd $PBS_O_WORKDIR
> mpiexec /home/user/mpiprogram
>
> meanings:
>     *  #!/bin/bash the shell to run the script under.
>     * #PBS -l nodes=8 sets how many processors to run on
>     * #PBS -q devel sets the queue to run in
>     * #PBS -j oe Combine stdout and stderr into the same file
>     * #PBS -o Example.output redirects the output from your job to   
> Example.output
>     * #PBS -l walltime=12:00:00 sets your walltime estimate to 12   
> hours. The time takes the form Days:Hours:Minutes:Seconds
>     * cd $PBS_O_WORKDIR makes sure the job runs in the directory
> it was submitted from.
>     * mpiexec ... or .../yourprogram is the program to run. If you   
> need to use something, put the use command before this line.
>
> There is a limitation on the walltime of around 40 hours. After
> running for 40 hours, the job will be killed automatically.
> If I want to calculate a complex structure which needs a long time,
> my question is: may I continue the work after the job was killed
> (like VASP, which writes out results during the calculation)?
> I may underestimate the walltime I need for the calculation, and
> would like to continue my calculation from the results I already
> have when the job was killed.
>
> Another question: the memory limit for these four queues is
> 1 GB/processor by default. In this case, how many k-points can I
> use? Does the k-point limit change with the number of
> nodes (let us say: is there any difference between 8 nodes and 32 nodes)?
>
>
> Thank you very much!
>
>
>
> Dong Su, Ph.D.
> Dept. Of Physics, Arizona State University
> P.O. Box 871504
> Tempe, Arizona 85287-1504
> Phone: 480-965-6327
> Fax: 480-965-7954
> E-mail: dong.su at asu.edu
>



-- 
Stefaan Cottenier
Instituut voor Kern- en Stralingsfysica
K.U.Leuven
Celestijnenlaan 200 D
B-3001 Leuven (Belgium)

tel: + 32 16 32 71 45
fax: + 32 16 32 79 85
e-mail: stefaan.cottenier at fys.kuleuven.be




