[Wien] 2 XEON 5355 or 4 XEON 5150?

Peter Blaha pblaha at theochem.tuwien.ac.at
Thu Oct 25 10:06:38 CEST 2007


We could do that, but
a) it is crucial that people use the same input (number of bands or
E-window), because the timing of the iterative scheme depends on that.

b) I do NOT expect big differences in performance behaviour between
full and iterative diagonalization, except that the performance of the
BLAS library becomes "less important", simply because the diagonalization
time is now much smaller, so HAMILT and HNS get more important (i.e. AMD
will "benefit" from that).

c) I'd consider it more important to also have the timings of HAMILT, HNS
and DIAG, because this allows one to judge the performance of the compiler
(HAMILT), of the BLAS library (DIAG), and a "mixed" value (HNS).
(grep HORB *output1; see the small shell sketch below)

d) L.Marks suggested putting a new MPI benchmark on the web, and I think 
this would be useful. However, I'm not sure about the "size" of this 
benchmark. It could target "medium size" cases (e.g. matrix size 
10000-15000), which could still run on a single processor but should also 
perform well on 2-16 nodes; or a big case which does not run on a 
usual single node (because of memory), but may scale to 100 (or more?) 
processors.

Suggestions are welcome.
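
For point c), a minimal shell sketch of how those timing lines could be
collected from a benchmark run (assuming the usual *output1 file names
written by lapw1; the exact column layout of the timing lines may differ
between versions, so adapt any further processing accordingly):

  # print the timing summary lines from all output1 files of a run;
  # the lines matched by HORB should contain the HAMILT, HNS and DIAG
  # times referred to in point c)
  for f in *output1; do
      echo "== $f =="
      grep HORB "$f"
  done

This only echoes the raw lines; summing or averaging over k-points is left
to the reader.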

> Peter, could we also think about looking at the benchmark results of the 
> new iterative diagonalization scheme?
> If you want people to use this approach for large systems, it is 
> important to know how it performs on various architectures.
> Best regards
> Florent
> 
> 
> Alexander Shaposhnikov wrote:
>> Hello Florent,
>>
>> On Monday 22 October 2007 15:35, Florent Boucher wrote:
>>   
>>> Dear Wien users,
>>>     
>> ***
>>   
>>> In that case, if you look at the benchmark results below, it is not a good
>>> solution!
>>>
>>> bi-Xeon 5320 (overclocked to 2.67 GHz)
>>>     
>> ******
>>   
>>> You can see that even an overclocked solution runs slower than an AMD
>>> when putting 8 jobs on it!
>>> The memory bandwidth is much lower than on AMD, due to the memory
>>> architecture. It is better in that case to buy more nodes with less
>>> expensive AMD chips, for which the efficiency is much better when the
>>> load average is close to 100%.
>>>     
>>
>> I believe those are the results I obtained for my machine :)
>> The scaling is very bad indeed, even though the machine has almost 20% faster 
>> memory than a standard non-overclocked bi-Xeon X5355 (and the same CPU 
>> frequency).
>>
>>   
>>> For information, we have been able to buy a solution with 20 nodes from
>>> a well-known company (1U, 2 x AMD 2.8 GHz dual-core, 8 GB DDR2 per node,
>>> Infinipath DDR 4X, 3-year warranty) for less than 50k€ (without taxes).
>>> The power consumption of such a configuration will be less than 7 kW.
>>>     
>>
>> You could have done better and paid less.
>> A typical 24-port InfiniBand switch is 5k€, and one node based around an Intel 
>> Q6600 with 8 GB memory is 1k€ including a memory-free InfiniBand network card, 
>> so 24k€ for 24 nodes, 30k€ total. 
>> The cluster would have 920 Gflops and consume no more than 5 kW.
>> And if you dare :), you can always overclock it trouble-free to at least 
>> 3 GHz, increasing that to 1.15 Tflops.
>> Scaling will be good enough, as the memory performance of Intel desktop 
>> chipsets is adequate. 
>>
>> Best Regards.
> 
> 
> -- 
>  -------------------------------------------------------------------------
> | Florent BOUCHER                    |                                    |
> | Institut des Matériaux Jean Rouxel | Mailto:Florent.Boucher at cnrs-imn.fr |
> | 2, rue de la Houssinière           | Phone: (33) 2 40 37 39 24          |
> | BP 32229                           | Fax:   (33) 2 40 37 39 95          |
> | 44322 NANTES CEDEX 3 (FRANCE)      | http://www.cnrs-imn.fr             |
>  -------------------------------------------------------------------------
> 
> 

