[Wien] How to do Site Configuration In Ubuntu 12.04
vishal jain
vjain045 at gmail.com
Fri Apr 5 06:47:33 CEST 2013
I did as you mentioned, but I still observe the same errors:
SRC_sumhfpara/compile.msg:make: *** [complex] Error 2
SRC_sumpara/compile.msg:make: *** [errclr.o] Error 127
SRC_supercell/compile.msg:make: *** [supercell.o] Error 127
SRC_symmetry/compile.msg:make: *** [symmetry.o] Error 127
SRC_symmetso/compile.msg:make: *** [symmetso.o] Error 127
SRC_telnes3/compile.msg:make: *** [modules.o] Error 127
SRC_tetra/compile.msg:make: *** [reallocate.o] Error 127
SRC_trig/compile.msg:make: *** [rhomb_in5.o] Error 127
SRC_txspec/compile.msg:make: *** [reallocate.o] Error 127
SRC_vecpratt/compile.msg:make[1]: *** [vecpratt.o] Error 127
SRC_vecpratt/compile.msg:make: *** [real] Error 2
SRC_vecpratt/compile.msg:make[1]: *** [vecpratt.o] Error 127
SRC_vecpratt/compile.msg:make: *** [complex] Error 2
SRC_structeditor/SRC_ncmsymmetry/compile.msg:make: *** [module.o] Error 127
SRC_structeditor/SRC_readwrite/compile.msg:make: *** [module.o] Error 127
SRC_structeditor/SRC_struct2mol/compile.msg:make: *** [reallocate.o] Error 127
SRC_structeditor/SRC_structgen/compile.msg:make: *** [module.o] Error 127
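
Error 127 from make means the shell could not find the command it tried to run, here almost certainly ifort itself, because the Intel environment is not set in the shell that runs siteconfig. A minimal check, assuming the default Composer XE 2013 install location on 32-bit Ubuntu (adjust the path and the ia32 argument if your installation differs):

  # Is the Intel Fortran compiler visible to the shell?
  which ifort || echo "ifort not in PATH"

  # If not, source the Intel environment script; it puts ifort on the PATH
  # and also defines MKLROOT, which siteconfig reported as undefined:
  source /opt/intel/composerxe/bin/compilervars.sh ia32

  # Verify, then rerun siteconfig_lapw:
  which ifort
  echo $MKLROOT

Putting the source line into ~/.bashrc keeps the environment set for future sessions.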
In P (configure Parallel execution) I chose:
**********************************
* Configure parallel execution *
**********************************
These options are stored in parallel_options of WIENROOT
You can change them later also manually.
Do you use ONLY a shared memory parallel architecture (ONE single multi-core node)?
On shared memory system it is normally better to start jobs in the background rather than using remote commands. If you select a shared memory system WIEN will by default not use remote shell commands
(USE_REMOTE and MPI_REMOTE = 0 in parallel_options)
and set the default granularity to 1.
You still can override this default granularity in your .machines file.
You may also set a specific TASKSET command to bind your executables
to a specific core on multicore machines.
Shared Memory Architecture? (y/n):y
Do you know/need a command to bind your jobs to specific nodes ?
(like taskset -c). Enter N / your_specific_command: n
On most mpi-2 versions, it is better to start an mpijob on the original machine and not via ssh on a remote system. If you are using mpi2 set MPI_REMOTE to 0.
Set MPI_REMOTE to 0 / 1: 0
********************************************************
Do you have MPI and Scalapack installed and intend to run
finegrained parallel? (This is usefull only for BIG cases
(50 atoms and more / unit cell) and you need to know details
about your installed mpi and fftw )
(y/n) n
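
The granularity mentioned above refers to the .machines file, which controls how WIEN2k distributes k-point parallel jobs. On a single shared-memory node it can be as simple as the sketch below, one "1:localhost" line per parallel process (an illustrative example for a four-core machine, not taken from this thread):

  granularity:1
  1:localhost
  1:localhost
  1:localhost
  1:localhost

Each "1:localhost" line starts one k-point parallel job on the local machine; with USE_REMOTE = 0 these jobs run in the background instead of over ssh.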
On Fri, Apr 5, 2013 at 9:42 AM, Gavin Abo <gsabo at crimson.ua.edu> wrote:
> Try changing the Linker and R_LIB lines to:
>
> Linker Flags:$(FOPT) -L/opt/intel/composerxe/mkl/lib/ia32 -pthread
> R_LIB (LAPACK+BLAS):-lmkl_lapack95 -lmkl_intel -lmkl_intel_thread
> -lmkl_core -openmp -lpthread
>
>
> On 4/4/2013 9:52 PM, vishal jain wrote:
>
> Dear Sir
>
> I found errors when compiling (R = Compile/Recompile):
>
> SRC_structeditor/SRC_ncmsymmetry/compile.msg:make: *** [module.o] Error 127
> SRC_structeditor/SRC_readwrite/compile.msg:make: *** [module.o] Error 127
> SRC_structeditor/SRC_struct2mol/compile.msg:make: *** [reallocate.o] Error 127
> SRC_structeditor/SRC_structgen/compile.msg:make: *** [module.o] Error 127
>
>
> In site configuration I chose the following commands:
> S (specify a system): I chose I (ifort + MKL, because I have installed
> l_mkl_11.0.2.146 and l_fcompxe_2013.3.163)
> C (specify compiler): I chose ifort + cc
> O (specify compiler options): shown below is how the path is defined
>
>
> *********************************************************
> * W I E N *
> * site configuration *
> *********************************************************
>
> Last configuration: Fri Apr 5 09:14:26 IST 2013
> Wien Version: WIEN2k_12.1 (Release 22/7/2012)
> System: linuxifc
>
>
> S specify a system
> C specify compiler
> O specify compiler options, BLAS and LAPACK
> P configure Parallel execution
> D Dimension Parameters
> R Compile/Recompile
> U Update a package
> L Perl path (if not in /usr/bin/perl)
> Q Quit
>
> Selection: O
>
> ******************************
> * Specify compiler options *
> ******************************
>
> PLEASE NOTE: Best performance can be obtained with processor specific
> options.
> Very important for speed-up is a optimized BLAS (like mkl, essl, ..),
> or at least the GOTO- or ATLAS-BLAS instead of the simple "-lblas_lapw"
>
> For more info see http://www.wien2k.at/reg_user/faq
> searching ....
> I could not find the mkl-library because MKLROOT is not defined.
> Please check whether mkl is installed at all and where
> (mkl is included in new ifort versions, see www.intel.com )
> Without mkl you should install the GOTO-blas or you must use the blas_lapw
> library (performance loss)
> Hit Enter to continue
> Since intel changes the name of the mkl-libraries from version to version,
> you may find the linking options for the most recent ifort version at
> http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/
>
> Recommended options for system linuxifc are:
> Compiler options: -FR -mp1 -w -prec_div -pc80 -pad -ip
> -DINTEL_VML -traceback
> Linker Flags: $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH)
> -pthread
> Preprocessor flags: '-DParallel'
> R_LIB (LAPACK+BLAS): -lmkl_lapack95_lp64 -lmkl_intel_lp64
> -lmkl_intel_thread -lmkl_core -openmp -lpthread
>
> Current settings:
> O Compiler options: -FR -mp1 -w -prec_div -pc80 -pad -ip
> -DINTEL_VML -traceback
> L Linker Flags: $(FOPT) -pthread -static
> P Preprocessor flags '-DParallel'
> R R_LIB (LAPACK+BLAS): -lmkl_lapack95_ia32 -lmkl_intel_ia32
> -lmkl_intel_thread -lmkl_core -openmp -lpthread
>
> S Save and Quit
> Q Quit abandon changes
>
> To change an item select option.
>
> Selection:
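
The quoted settings point to the likely linking problem: R_LIB uses -lmkl_lapack95_ia32 and -lmkl_intel_ia32, but in MKL 11 the 32-bit libraries carry no "_ia32" suffix; the names are libmkl_lapack95, libmkl_intel, libmkl_intel_thread, and libmkl_core, which matches Gavin's suggestion above. A quick way to confirm the names actually present on disk (the path below is the default Composer XE location, adjust as needed):

  ls /opt/intel/composerxe/mkl/lib/ia32/ | grep -E 'lapack95|intel|core'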