[Wien] WARNING During SCF Calculations
sandeep Kumar
sandeepk.phy at gmail.com
Sun Nov 5 15:26:56 CET 2017
Dear Professor Peter Blaha and WIEN2k users,
Following your suggestion, I have changed the NMATMAX value and recompiled,
but I am still facing the same problem.
These are the details I used during the installation of WIEN2k version
17.1:
WIEN2k COMPILER:
fortran:ifort
c:icc
parallel:mpiifort
WIEN2k_OPTIONS
current:FOPT:-O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
current:FPOPT:-O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
current:LDFLAGS:$(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
current:DPARALLEL:'-DParallel'
current:R_LIBS:-lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread
current:FFTWROOT:/home/qnt/sandeep/fftw/
current:FFTW_VERSION:FFTW3
current:FFTW_LIB:lib
current:FFTW_LIBNAME:fftw3
current:LIBXCROOT:/home/qnt/sandeep/libxc/
current:LIBXC_FORTRAN:xcf03
current:LIBXC_LIBNAME:xc
current:SCALAPACKROOT:/opt/intel/composer_xe_2013_sp1.0.080/mkl/lib/
current:SCALAPACK_LIBNAME:mkl_scalapack_lp64
current:BLACSROOT:/opt/intel/composer_xe_2013_sp1.0.080/mkl/lib/
current:BLACS_LIBNAME:mkl_blacs_intelmpi_lp64
current:ELPAROOT:
current:ELPA_VERSION:
current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_
current:CORES_PER_NODE:1
current:MKL_TARGET_ARCH:intel64
$WIENROOT/SRC_lapw1/param.inc:
!
! Constant parameter definition
!
INTEGER LMAX, LMMX, LOMAX, restrict_output
INTEGER NATO, NDIF, NKPTSTART, NMATMAX, NRAD
INTEGER NSYM, NUME,NVEC1, NVEC2, NVEC3
INTEGER NWAV,NMATIT,NUMEIT,HB,NMATBL,nloat
PARAMETER (LMAX= 13)
PARAMETER (LMMX= 120)
PARAMETER (LOMAX= 3)
PARAMETER (NKPTSTART= 100)
PARAMETER (NMATMAX= 40000)
PARAMETER (NRAD= 881)
PARAMETER (NSYM= 48)
PARAMETER (NUME= 8000)
PARAMETER (NVEC1= 35)
PARAMETER (NVEC2= 35)
PARAMETER (NVEC3= 95)
! PARAMETER (nloat= 60) ! should be 3 for only one LO
      PARAMETER (RESTRICT_OUTPUT= 9999) ! 1 for mpi with less output-files
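Since the question concerns NMATMAX and memory, a rough back-of-the-envelope check can be made. The sketch below is not WIEN2k code; it assumes (as is commonly stated for complex calculations in lapw1) that the Hamiltonian and overlap matrices are dense NMATMAX x NMATMAX arrays stored as complex*16, i.e. 16 bytes per element, and that both must fit in memory at once. The function name is hypothetical.

```python
# Rough memory estimate for the lapw1 step as a function of NMATMAX.
# Assumptions (not from the post): two dense NMATMAX x NMATMAX matrices
# (Hamiltonian and overlap), complex*16 storage (16 bytes per element).

BYTES_PER_ELEMENT = 16  # complex*16


def lapw1_matrix_gib(nmatmax, n_matrices=2):
    """Approximate memory in GiB for n_matrices dense NMATMAX x NMATMAX matrices."""
    return n_matrices * nmatmax ** 2 * BYTES_PER_ELEMENT / 2 ** 30


# With NMATMAX=40000, the two matrices alone need roughly 48 GiB,
# which should be compared against the memory available per node.
print(f"NMATMAX=40000: ~{lapw1_matrix_gib(40000):.1f} GiB")
```

If the per-node memory is much smaller than this estimate, either NMATMAX has to be reduced or the calculation distributed over more MPI ranks.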
The file parallel_options contains the following:
setenv TASKSET "no"
if ( ! $?USE_REMOTE ) setenv USE_REMOTE 0
if ( ! $?MPI_REMOTE ) setenv MPI_REMOTE 0
setenv WIEN_GRANULARITY 1
setenv DELAY 0.1
setenv SLEEPY 1
setenv WIEN_MPIRUN "mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"
setenv CORES_PER_NODE 1
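For reference, the WIEN_MPIRUN line above is a command template: at run time the placeholders _NP_, _HOSTS_ and _EXEC_ are substituted with the number of processes, the machines file and the executable to run. A minimal sketch of that substitution (not WIEN2k's actual script code; the file and executable names below are only illustrative):

```python
# Minimal sketch of expanding the WIEN_MPIRUN template.
# _NP_   -> number of MPI processes
# _HOSTS_ -> machines file
# _EXEC_ -> executable (plus its .def file)

def expand_mpirun(template, np, hosts, exec_cmd):
    """Replace the WIEN_MPIRUN placeholders with concrete values."""
    return (template.replace("_NP_", str(np))
                    .replace("_HOSTS_", hosts)
                    .replace("_EXEC_", exec_cmd))


cmd = expand_mpirun("mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_",
                    16, ".machine1", "lapw1c_mpi lapw1_1.def")
print(cmd)  # mpirun -np 16 -machinefile .machine1 lapw1c_mpi lapw1_1.def
```

This is why the MPIRUN setting must match the syntax of the installed MPI (here Intel MPI, which accepts -np and -machinefile).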
Our cluster has 132 GB of memory, 33 compute nodes, and 592 compute
cores.
Please correct me if anything is wrong in the parallel installation or in
the NMATMAX setting, and please suggest how I can get rid of this problem.
I would be thankful.
Thanks
Sandeep Kumar
--
Dr. Sandeep Kumar, Post-doc
Department of Chemistry,
The Lise Meitner-Minerva Center for Computational Quantum Chemistry &
The Institute for Nanotechnology and Advanced Materials,
Bar-Ilan University, Ramat-Gan 52900, Israel