[Wien] fine grained parallel execution (lapw1_mpi)

Jorissen Kevin Kevin.Jorissen at ua.ac.be
Fri Aug 29 09:16:31 CEST 2003


Hello Tom,
 
I didn't realize you were using ifc+MKL.  MKL supports 'automatic parallelisation', i.e. software compiled against MKL can run in parallel without you having to program the parallelisation yourself.
All you have to do is activate 'multithreading' or 'hyperthreading'; I believe that for MKL 6 there is an environment variable, OMP_NUM_THREADS or something like that (check the MKL release notes, technical notes, manual ...).  By default this multithreading behaviour is enabled in MKL 5.2 (it took me some time to realize what was happening) and disabled in MKL 6 (which I'm using now).
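 
(As a rough sketch, assuming a bash shell and that your MKL version honours the standard OpenMP variable, it would look something like:

# hypothetical example: let MKL's threaded BLAS/LAPACK use 4 threads,
# then run the ordinary serial lapw1 binary as usual
export OMP_NUM_THREADS=4    # pick the number of CPUs in your machine
lapw1 lapw1.def             # use whatever def file your run produces

Under csh/tcsh, which the WIEN2k scripts use, it would be "setenv OMP_NUM_THREADS 4" instead.  Check the MKL documentation for the exact variable name and default for your MKL version.)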
With this MKL feature (they claim excellent scaling), it may no longer be necessary to bother with the MPI+ScaLAPACK compilation, which seems to cause everyone trouble...
 
Kevin.
 
PS: It might be more limited in its application, though.  MKL can automatically determine how many processors one machine has and then use all of them (i.e. you execute lapw1 and MKL uses as many threads as there are processors), but I'm not sure the MKL BLAS etc. can be taught to use different machines for one job, the way we can specify for MPI in the .machines file by writing several machine names on one line (see the sketch below).  I haven't tried it, so I really don't know.
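 
(For comparison, a very rough sketch of how the MPI case is set up, assuming bash and made-up host names node1/node2; the exact .machines syntax should be checked against the WIEN2k user's guide:

# hypothetical example: several machine names on one line of .machines
# is what makes the parallel scripts dispatch lapw1_mpi across them
cat > .machines << 'EOF'
1:node1 node1 node2 node2
granularity:1
EOF

The MKL threading, as far as I know, stays within a single machine.)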
 
 

	-----Original Message----- 
	From: tom_y at livedoor.com [mailto:tom_y at livedoor.com] 
	Sent: Fri 8/29/2003 6:36 AM 
	To: wien at zeus.theochem.tuwien.ac.at 
	Cc: 
	Subject: [Wien] fine grained parallel execution (lapw1_mpi)
	
	

	Dear WIEN2k developers and users,
	
	I tested the latest source file for lapw1, which was uploaded by Prof.
	Blaha, but I got the same error as before when I directly called
	lapw1_mpi.
	Again, the error message is as follows:
	
	%mpirun -np 4 -machinefile m /home/wien2k/WIEN2k_03_3/lapw1_mpi
	lapw1_1.def
	 Using            4  processors, My ID =            0
	 Using            4  processors, My ID =            2
	 Using            4  processors, My ID =            1
	 Using            4  processors, My ID =            3
	FORTRAN STOP LOPW - Error
	FORTRAN STOP LOPW - Error
	FORTRAN STOP LOPW - Error
	FORTRAN STOP LOPW - Error
	
	I also tried mpirun with -np 1 and -np 2, but the same errors occurred
	as with -np 4.
	
	My test case is the TiC unit cell, the same one as in the WIEN2k
	manual.
	
	Now I'm wondering whether the ScaLAPACK library has been compiled
	successfully or not, since lapw0_mpi runs without problems (so MPI
	itself seems to work). During siteconfig I didn't get any errors.
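	
	(As a quick, rough check, assuming the libraries were built as static
	archives named libscalapack.a and libblacs.a (placeholder names), one
	could look for the expected symbols, e.g. in bash:
	
	# hypothetical check: a ScaLAPACK eigensolver and a BLACS grid routine
	# should both show up in the archives if the build succeeded
	nm libscalapack.a | grep -i pzheevx
	nm libblacs.a     | grep -i blacs_gridinit
	
	If nothing shows up, the ScaLAPACK/BLACS build itself is suspect.)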
	
	Does anyone have experience building the ScaLAPACK library with Intel
	ifc, icc, and MKL on a Linux system?
	I'm also trying to use the ATLAS library.
	
	I welcome any suggestions.
	
	Sincerely,
	
	Tom Yamamoto
	
	>> >Does lapw1 run in single mode? LOPW errors often occur when your struct
	>> >file is wrong and some atoms are specified twice.
	>>
	>> Yes, serial-mode lapw1 runs without problems, and the k-point-level
	>> parallel version also runs without problems.
	>
	>Ok, I hope you have used the same directory and identical inputs,...
	>
	>> I'd like to try it. I'd appreciate it if you could put it on the web
	>> site soon.
	>
	>It is available at www.wien2k.at/reg_user/updates. Just download
	>SRC_lapw1.tar.gz
	>and use the update option of siteconfig.
	>
	>> %mpirun -np 4 -machinefile m /home/wien2k/WIEN2k_03_3/lapw1_mpi
	>> lapw1_1.def
	>>  Using            4  processors, My ID =            0
	>>  Using            4  processors, My ID =            2
	>>  Using            4  processors, My ID =            1
	>>  Using            4  processors, My ID =            3
	>> FORTRAN STOP LOPW - Error
	>> FORTRAN STOP LOPW - Error
	>> FORTRAN STOP LOPW - Error
	>> FORTRAN STOP LOPW - Error
	>>
	>> The same "LOPW - Error" messages appeared as in the case of lapw1.def.
	>>
	>> Do you have any suggestions for what to do next?
	>
	>Try mpirun -np 1  and   2   instead of 4
	>
	>What is your test case? How large is the matrix size (grep :RKM case.output1)?
	>How many LOs do you have?
	>
	>                                      P.Blaha
	>--------------------------------------------------------------------------
	>Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
	>Phone: +43-1-58801-15671             FAX: +43-1-58801-15698
	>Email: blaha at theochem.tuwien.ac.at    WWW: http://info.tuwien.ac.at/theochem/
	>--------------------------------------------------------------------------
	>
	
	
	
	
	_______________________________________________
	Wien mailing list
	Wien at zeus.theochem.tuwien.ac.at
	http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
	


