[Wien] Intel(R) Xeon Phi™ coprocessor

Laurence Marks L-marks at northwestern.edu
Thu Sep 12 16:50:21 CEST 2013


I will add that I have been told that with the latest Xeons (I am not
sure about the i7), memory speed can also matter.

One additional option, the one I now use, is to have a vendor put
together a small cluster for you. At least in the US this seems to be
competitive, since the vendors get a discount on the hardware price.
When they offer several configurations, my procedure is to benchmark
lapw1 on a test machine (ideally on the actual machines, and with mpi)
to compare the options as well as the different vendors. At least at
the higher Xeon end I cannot invest the time to learn all the
(constantly changing) hardware details and prefer to leave that to
experts. I also get technical support included at no extra cost, which
can be very important.
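For the benchmarking itself I simply time a serial lapw1 run on a
prepared test case and compare the wall-clock times across machines and
vendors; a minimal sketch (the case name is a placeholder, and it
assumes WIEN2k is already compiled on the test machine):

    cd test_case        # prepared benchmark case (placeholder name)
    time x lapw1 -c     # time one serial lapw1 run; compare the "real" time
    # for an mpi comparison, set up a .machines file and use: time x lapw1 -p -c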

As one example, the list Michael just sent does not include the latest
v2 models (too new). I will shortly be benchmarking some of these; they
look cheaper, but....

On Thu, Sep 12, 2013 at 9:06 AM, Luis Ogando <lcodacal at gmail.com> wrote:
>    Thank you, Michael ! This will be useful too !
>    Nevertheless, we cannot forget that, some time ago, Prof. Blaha
> commented that the "cache memory" is more important than the "clock" for
> WIEN2k (discarding extreme cases, of course).
>    All the best,
>                    Luis
>
>
> 2013/9/12 Michael Sluydts <michael.sluydts at ugent.be>
>>
>> While I'm not sure how easily normal desktop benchmarks transfer to
>> parallel processing through WIEN2k, I usually look at the following
>> benchmarks when comparing CPUs (and prices):
>>
>> http://cpubenchmark.net/high_end_cpus.html
>>
>>
>> Best regards,
>>
>> Michael Sluydts
>>
>> On 12/09/2013 15:43, Luis Ogando wrote:
>>
>> Dear Prof. Blaha,
>>
>>    Thank you very much for the explanations. They will be very useful !!
>>    All the best,
>>                         Luis
>>
>>
>> 2013/9/12 Peter Blaha <pblaha at theochem.tuwien.ac.at>
>>>
>>> This depends a lot on what you want to do and how much money you have.
>>>
>>> The single-core speed of a fast i7 is at least as good as (or better
>>> than) that of most Xeons, and the i7s are MUCH cheaper. So for all systems
>>> up to 64-100 atoms/cell, where you need several k-points, a small cluster
>>> of i7 CPUs (4 cores, or the more expensive 6-core models) with a GB network
>>> and a common NFS file system is for sure the fastest platform and in
>>> particular has by FAR the best price/performance ratio (one powerful i7
>>> computer may cost about 1000 Euro). For bigger clusters, a drawback can be
>>> the large "space" needed to put all the PCs on a big shelf, but if you have
>>> less than 10000 Euros, this is probably the best choice.
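As a concrete illustration of this k-point parallel mode: the nodes
simply go into the .machines file of the case directory and the scf
cycle is started with "run_lapw -p"; a minimal sketch with placeholder
hostnames:

    granularity:1
    1:node1
    1:node2
    1:node3
    1:node4
    extrafine:1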
>>>
>>> However, Xeons can be coupled (2-4 Xeons) into a "single multicore
>>> computer" (e.g. 16 cores), which may work with mpi and can be used to
>>> handle systems of up to 200-300 atoms. They can also be bought in small
>>> boxes and may fit into a single 19-inch cabinet. But of course such systems
>>> are much more expensive. From what I said above it should be clear that it
>>> is completely useless to buy a "single 4-core Xeon computer".
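For such a single multicore box the .machines file would instead request
one mpi job spanning the cores (again run with "run_lapw -p"); a minimal
sketch, where the hostname and core count are placeholders:

    granularity:1
    1:xeonbox:16
    lapw0:xeonbox:16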
>>>
>>> The next step would be to buy an Infiniband switch + cards and couple
>>> your PCs with this fast network into a powerful multinode mpi cluster.
>>> Since the switch/cards are fairly expensive, one usually takes Xeons as the
>>> platform here. However, you need to know how to install/configure the
>>> software properly. I've seen such clusters even in computing centers which
>>> were completely useless, because the network/mpi was unstable and jobs
>>> would crash randomly every couple of hours .....
>>>
>>> Our strategy:
>>> i) We have a GB-networked cluster of Intel i7 computers (which we maintain
>>> ourselves; this cluster also includes all the user workstations) and do all
>>> the calculations for systems up to 64 atoms/cell on these systems.
>>> ii) For bigger systems we go to our university computer center and run
>>> there with a PBS queuing system. This has the advantage that we do not need
>>> to care about the installation of the Infiniband network or the mpi
>>> infrastructure (but we always use Intel MPI together with ifort/mkl).
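For reference, a PBS submission for such a center-run mpi job might look
roughly like the sketch below; the queue parameters, core counts and the
way .machines is generated from the assigned nodes are site-specific
guesses, not the actual setup described above:

    #!/bin/bash
    #PBS -N wien2k_case
    #PBS -l nodes=2:ppn=16
    #PBS -l walltime=24:00:00
    cd $PBS_O_WORKDIR
    # build a .machines file from the nodes PBS assigned to this job
    rm -f .machines
    echo "granularity:1" >> .machines
    for host in $(sort -u $PBS_NODEFILE); do
        echo "1:${host}:16" >> .machines       # one 16-core mpi job per node
    done
    echo "lapw0:$(head -1 $PBS_NODEFILE):16" >> .machines
    run_lapw -p -i 40                          # parallel scf cycle, max 40 iterations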
>>>
>>>
>>> On 09/11/2013 06:31 PM, Luis Ogando wrote:
>>>>
>>>> Dear Prof. Blaha,
>>>>
>>>>     Just out of curiosity, which processor did you buy?
>>>>     Is the Xeon family better than the i7 one for WIEN2k calculations?
>>>>     All the best,
>>>>                         Luis
>>>>
>>>>
>>>> 2013/9/11 Peter Blaha <pblaha at theochem.tuwien.ac.at>
>>>>
>>>>
>>>>     I don't know what "latest" means. We use the latest one installed on
>>>>     our supercomputers (4.1.1.036).
>>>>
>>>>     I have not seen any significant change with mpi in the last years.
>>>>
>>>>     PS: I just got info that we now have a new ifort available for
>>>>     download ...
>>>>
>>>>
>>>>     On 09/11/2013 05:00 PM, Laurence Marks wrote:
>>>>
>>>>         Thanks.
>>>>
>>>>         One thing I will add/ask concerning the parallelization: the
>>>>         latest impi seems to be substantially better -- have you tried
>>>>         it? I have noticed this not just with Wien2k; I am told that
>>>>         others have seen improvements in other codes as well.
>>>>
>>>>         On Wed, Sep 11, 2013 at 9:42 AM, Peter Blaha
>>>>         <pblaha at theochem.tuwien.ac.at> wrote:
>>>>
>>>>             Before buying a couple of new computers, I was asking myself
>>>>             the same question and discussed it with some people from our
>>>>             computing departments.
>>>>
>>>>             The conclusions:
>>>>             a) Potentially very good, but in practice very questionable,
>>>>             because for most applications you cannot get the real speed
>>>>             out of it (10 times faster than an Intel i7). This is true
>>>>             even for many lapack/mkl subroutines where it "should" work
>>>>             better. They told me to "wait" until the mkl becomes better
>>>>             (hopefully). I'm not too optimistic when I see how badly the
>>>>             mkl parallelization on multicore machines works (2 cores is
>>>>             very good, but 4 or more is already very bad).
>>>>
>>>>             b) The nature of our problem (a big eigenvalue problem): a
>>>>             "fast processor" is useful only for large problems --> large
>>>>             memory. You can now buy Phi coprocessors with quite large
>>>>             memory, but then they are terribly expensive (and 5 "normal"
>>>>             PCs are faster and cheaper).
>>>>
>>>>             c) The hardware design has VERY slow communication between
>>>>             main memory and Phi memory. This also makes parallelization
>>>>             over several Phi nodes via mpi practically impossible (if you
>>>>             need any significant data transfer, as for an eigenvalue
>>>>             problem).
>>>>
>>>>             Thus I did not buy it.
>>>>
>>>>             However, if anybody has access and time to try out WIEN2k on
>>>>             Phis, I would be very interested in getting feedback. (Maybe
>>>>             these computer people were not good enough ....)
>>>>
>>>>             PS: I know from G. Kresse that, some time ago (maybe 2 years
>>>>             ?), they had an expert from NVIDIA with them. After 2 weeks of
>>>>             porting VASP to these GPUs by this expert, VASP on the GPU was
>>>>             "almost as fast" as on an Intel i7 processor.
>>>>
>>>>
>>>>             On 09/11/2013 04:16 PM, Laurence Marks wrote:
>>>>
>>>>                 Anyone know if these will be viable with Wien2k (mpi,
>>>>                 i.e. large problems)?
>>>>
>>>>
>>>
>>> --
>>>
>>>                                       P.Blaha
>>>
>>> --------------------------------------------------------------------------
>>> Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
>>> Phone: +43-1-58801-165300             FAX: +43-1-58801-165982
>>> Email: blaha at theochem.tuwien.ac.at    WWW:
>>> http://info.tuwien.ac.at/theochem/
>>>
>>> --------------------------------------------------------------------------
>>
>



-- 
Professor Laurence Marks
Department of Materials Science and Engineering
Northwestern University
www.numis.northwestern.edu 1-847-491-3996
"Research is to see what everybody else has seen, and to think what
nobody else has thought"
Albert Szent-Gyorgi

