[Wien] Computer resources for a 64-processor cluster

ROBERTO LUIS IGLESIAS PASTRANA roberto at uniovi.es
Tue Feb 28 15:22:40 CET 2006


Dear Dr. Andersen,

Thanks a lot for your reply. It helps me a great deal as a first approach to the question. Of course, more trouble may come once the tests have begun.

Best regards

Roberto Iglesias

----- Original Message -----
From: Torsten Andersen <thor at physik.uni-kl.de>
Date: Tuesday, February 28, 2006 12:33 pm
Subject: Re: [Wien] Computer resources for a 64-processor cluster

> Dear Mr. Iglesias,
> 
> this can only be answered by a test run... the required memory, and
> especially the disk space, depend very much on the number of atoms,
> the type of atoms, the cutoff values in case.in1c and case.in2c, and
> the number of k-points.
> 
> For the memory, my experience says that 2 GB per processor is
> sufficient under k-point parallelization.
> 
> Disk space... depends on your calculation, but a few hundred GB
> should be sufficient for most cases.
> 
> About the CPU time I cannot say anything... you would need to run a
> test. Make a test for 1 k-point on one CPU and scale it linearly
> (plus epsilon). For k-point parallelization you need at least as many
> k-points as processors.
> 
> For MPI jobs, the linear scaling doesn't work, since your network
> speed also has to be taken into account... don't use it if you can
> avoid it.
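The scaling rule above (time for one k-point on one CPU, scaled linearly plus a small overhead, with at least as many k-points as processors) can be sketched as a back-of-the-envelope estimate. This is a hypothetical helper for illustration only, not part of WIEN2k; the 5% default overhead stands in for the "epsilon" mentioned in the reply:

```python
import math

def estimate_scf_time(t_one_kpoint_s, n_kpoints, n_processors, overhead=0.05):
    """Rough wall time per SCF iteration under k-point parallelization.

    Assumes near-linear scaling: each processor handles its share of the
    k-points, plus a small fractional overhead ("epsilon").
    """
    if n_kpoints < n_processors:
        # Extra processors sit idle: effective parallelism is capped by
        # the number of k-points.
        n_processors = n_kpoints
    kpoints_per_proc = math.ceil(n_kpoints / n_processors)
    return t_one_kpoint_s * kpoints_per_proc * (1 + overhead)

# Example: 100 s per k-point, 64 k-points on 64 CPUs
# -> one k-point per CPU, ~105 s per iteration with 5% overhead.
print(estimate_scf_time(100.0, 64, 64))
```

With 128 k-points on the same 64 CPUs, each processor handles two k-points and the estimate doubles; as the reply notes, no such simple formula applies to MPI fine-grained parallelization, where network speed dominates.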
> 
> Best regards,
> Torsten Andersen.
> 
> ROBERTO LUIS IGLESIAS PASTRANA wrote:
> > Hi all!
> > 
> > I could not find in the FAQs or in the mailing list whether this
> > question has been answered before, so please excuse me if I am
> > asking something that has already been asked.
> > We have a 64-processor cluster at my institution, and the
> > administrator wants to know what the RAM usage, disk-space usage,
> > and CPU time will be for the parallel jobs we intend to run.
> > Basically, we aim to study energetics involving both structural and
> > volume relaxation in magnetic binary alloy systems through a
> > supercell approach. I know that NMATMAX should be larger than 10000
> > in order to use fine-grained parallelization, but I have never done
> > this myself, so I don't really have an estimate for the above
> > parameters.
> > If anybody is willing to help, I would be most grateful.
> > 
> > Roberto Iglesias
> > 
> > _______________________________________________
> > Wien mailing list
> > Wien at zeus.theochem.tuwien.ac.at
> > http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> > 
> 
> -- 
> Dr. Torsten Andersen        TA-web: http://deep.at/myspace/
> AG Hübner, Department of Physics, Kaiserslautern University
> http://cmt.physik.uni-kl.de    http://www.physik.uni-kl.de/
>