[Wien] Low cpu usage on open-mosix Linux cluster

Torsten Andersen thor at physik.uni-kl.de
Mon Oct 18 14:38:21 CEST 2004


Dear Mr. Lombardi,

well, at least for lapw2, a fast file system (15k-RPM local disks with 
huge caches and hardware-based RAID-0) is essential to utilizing more 
than 1% of the CPU time... and if more than one process accesses the 
same file system at the same time (e.g., parallel lapw2), this 
requirement becomes even more critical.

If you have problems getting lapw1 to run at 100% CPU time, the system 
seems to be seriously misconfigured. I can think of two possible 
problems in the setup (there might be more):

1. The scratch partition is NFS-mounted instead of local (and despite 
many manufacturers' claims to the contrary, networked file systems are 
still VERY SLOW compared to local disks).

2. The system memory bandwidth is too low, e.g., using DDR-266 with 
Xeons, or the memory is connected to only one CPU on Opterons.
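For the first point, a quick sanity check can be run from the shell on each node. This is only a sketch: the SCRATCH variable and its /tmp default are placeholders for wherever your scratch partition actually is.

```shell
# Placeholder default -- point SCRATCH at your real scratch partition.
SCRATCH=${SCRATCH:-/tmp}

# Check 1: is the scratch directory on a networked file system?
# 'df -T' prints the file system type in the second column of line 2.
fstype=$(df -T "$SCRATCH" | awk 'NR==2 {print $2}')
echo "Scratch file system type: $fstype"
case "$fstype" in
  nfs*) echo "WARNING: $SCRATCH is NFS-mounted; use a local disk instead." ;;
  *)    echo "$SCRATCH appears to be on a local file system." ;;
esac

# Check 2 (memory bandwidth) is best measured with a benchmark such as
# STREAM; on NUMA machines, 'numactl --hardware' (if installed) shows
# whether all memory is attached to a single CPU.
```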

In order to "diagnose" a little better, we need to know the 
configuration in detail. :-)

Best regards,
Torsten Andersen.

EB Lombardi wrote:
> Dear Wien users
> 
> When I run Wien2k on a Linux-openMosix cluster, lapw1 and lapw2 (k-point 
> parallel) processes mostly use a low percentage of the available CPU 
> time. Typically only 10-50% of each processor is used, with values below 
> 10% and above 90% also occurring. On the other hand, single processes, 
> such as lapw0, etc., typically use 99.9% of the processing power of one 
> processor. On each node, (number of jobs) = (number of processors).
> 
> This low CPU utilization does not occur on a dual-processor Linux 
> machine, where CPU utilization is mostly 99.9%.
> 
> Any suggestions on improving the CPU utilisation of lapw1c and lapw2 on 
> mosix clusters would be appreciated.
> 
> Regards
> 
> Enrico Lombardi


-- 
Dr. Torsten Andersen        TA-web: http://deep.at/myspace/
AG Hübner, Department of Physics, Kaiserslautern University
http://cmt.physik.uni-kl.de    http://www.physik.uni-kl.de/
