[Wien] Parallel execution & independent scratch disks

Torsten Andersen thor at physik.uni-kl.de
Fri Nov 5 07:56:04 CET 2004


Dear Mr. Feindel,

Kevin Jorissen gave hints on this a few days ago; look for his mails in 
the mailing list archive. You will probably have to write some scripts 
to "control" the queueing system and abandon w2web for running the case 
(use run[sp]_lapw instead).
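
Such a script would typically build the .machines file for a k-point 
parallel run and stage the case files from a shared directory onto each 
node's local scratch disk. The sketch below is only an illustration: the 
node names, the /home and /scr paths, and the use of ssh/scp are 
assumptions about your site, not part of WIEN2k itself (the "1:host" and 
"granularity:1" lines are the standard .machines format for k-point 
parallelism).

```shell
#!/bin/sh
# Sketch of a staging script for k-point parallel run_lapw on nodes
# with non-shared scratch disks. All paths and hostnames below are
# hypothetical examples -- adapt them to your cluster and queue system.

CASE=case                        # case name (example)
SHARED=/home/wien2k/$CASE        # NFS-shared staging area (assumed path)
SCRATCH=/scr/wien2k/$CASE        # node-local scratch, not shared (assumed)
NODES="node1 node2"              # normally taken from the queue, e.g. $PBS_NODEFILE

# Build the .machines file: one "1:host" line per node requests one
# k-point parallel job on that host.
rm -f .machines
for n in $NODES; do
    echo "1:$n" >> .machines
done
echo "granularity:1" >> .machines

# Stage the w2web-generated input files to each node's scratch disk,
# then start the parallel run. Commented out here because it needs a
# real cluster with passwordless ssh between nodes:
# for n in $NODES; do
#     ssh "$n" "mkdir -p $SCRATCH"
#     scp "$SHARED"/* "$n:$SCRATCH/"
# done
# run_lapw -p
```

How results get copied back from the node scratch disks afterwards is 
the mirror image of the staging loop and is left out here.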

Best regards,
Torsten Andersen.

Kirk Feindel wrote:
> Hi,
>  
> We "borrow" part of a homogeneous cluster.  Only the master node can be 
> accessed via w2web, and we do not want to run any calculations on the 
> master.  Also, we have to keep all data on the scratch disks, which are 
> not shared between the master and the nodes, or between nodes.  Is it 
> possible to run a k-point parallel job on multiple nodes (excluding the 
> master) without manually copying the files generated via w2web to the 
> scratch directory on each node that will be used?
>  
> i.e., say I create a new job via w2web on the master in                 
> /scr/wien2k/case/
>  
> but plan to run the job on node1 and node2 only in                       
> /scr/wien2k/case/
>  
> and /scr/ on the master and respective nodes are not shared. 
>  
> Currently, I just manually copy my files from the master's /scr/ to the 
> NFS-shared /home/ directory, then on the node I want to run the job on, 
> copy the files from the shared /home/ directory to that node's /scr/ 
> directory.
>  
> There must be a better way.
>  
> Thanks
>  
> Kirk
> *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *
> Kirk Feindel
> E3-48 Gunning/Lemieux Chemistry Centre
> University of Alberta
> Edmonton, AB  T6G 2G2
> *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *

-- 
Dr. Torsten Andersen        TA-web: http://deep.at/myspace/
AG Hübner, Department of Physics, Kaiserslautern University
http://cmt.physik.uni-kl.de    http://www.physik.uni-kl.de/