[Wien] [Re-posting] Questions about calculating ELNES spectrum with parallel option
Peter Blaha
pblaha at theochem.tuwien.ac.at
Fri Oct 16 22:55:58 CEST 2009
For parallelization:
Can you do: ssh localhost and get a login WITHOUT being asked for a
password? If not, read the UG.
There is no logical reason why you should not be able to calculate
TELNES for higher energies. Check case.scf_1 (how many eigenvalues,
and what are the largest?) and case.output2 (from the "qtl" calculation;
it will tell you at the bottom how many bands it has calculated, and you
can check at the beginning of that file which energy range this covers).
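
For reference, a minimal way to set up password-less ssh to localhost
(a sketch assuming a standard OpenSSH setup with default key paths; skip
the keygen step if ~/.ssh/id_rsa already exists):

```shell
# generate a key pair with an empty passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# authorize that key for logins to this machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# this should now return the hostname without a password prompt
ssh localhost hostname
```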
For speed: If I understood you right, you used 1000 k-points also for
the 64-atom supercell??? For a cell 32 times larger, only a much smaller
k-mesh (about 30 k-points) is needed.
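
The rule behind this can be checked with a quick back-of-the-envelope
calculation (a sketch, not part of the original mail): the k-point
density in reciprocal space should stay roughly constant, so a cell n
times larger in real space needs about n times fewer k-points.

```python
def scaled_kpoints(nk_small_cell, volume_ratio):
    """Rough k-point count for a cell 'volume_ratio' times larger."""
    # reciprocal-space volume shrinks by the same factor the
    # real-space cell grows, so the k-point count scales down with it
    return max(1, round(nk_small_cell / volume_ratio))

# 1000 k-points for the small cell, supercell 32x larger:
print(scaled_kpoints(1000, 32))  # about 31
```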
Yi yoo soo wrote:
> Dear all users,
>
>
>
> I am sorry for the multiple postings. In my previous letter, it seems
> that my questions were not presented clearly. More detailed descriptions
> of the problem and my system specifications are given below. I have run
> into problems when calculating an ELNES spectrum.
>
>
>
> *(1) **Checked crystalline system*
>
> I’ve calculated an ELNES spectrum for MgO to check whether Wien2k is
> properly compiled on my system. The tests include:
>
> a) single-cell MgO without the k-point parallelization option,
>
> b) single-cell MgO with the parallel option,
>
> c) 2x2x2 supercell MgO with the parallel option,
>
> and finally d) 2x2x2 supercell without the parallel option.
>
>
>
> *(2) **ELNES calculation procedure*
>
> Structural information used: MgO with a 2x2x2 supercell generated with
> the supercell program (x supercell)
>
> Details of the settings (case.in* files): I just followed the settings
> from the *.pdf files uploaded on the web page.
>
> init_lapw (with 1000 k-points, RKmax=6.5)
>
> edit case.in1 (increase the maximum energy value from the default to
> 4.5 eV, to calculate the higher-energy region in TELNES)
>
> run_lapw -it (use iterative diagonalization; the charge convergence
> criterion is set to 0.0001)
>
> telnes (for the Mg K-edge spectrum: setting the edge onset, etc.;
> energy grid from 0 eV to 35.0 eV)
>
> I did not set the broadening variables (just to check that spectra can
> be calculated at all).
>
> (Most other variables were left at their default values.)
>
>
>
> *(3) **Results from test*
>
> In test a), we got a result consistent with the one shown in the
> Wien2k tutorial (from the latest conference files).
>
> In test b), we cannot get the desired result. Instead, the ELNES
> spectrum is calculated only up to 11 eV (note that we set the energy
> grid up to 35 eV; I also changed the emax value accordingly in the
> case.in1 file).
>
> In test c), it took about 5 days to complete the SCF cycles. It also
> produced an ELNES spectrum only up to 11 eV (almost the same as the
> result of test b).
>
> In test d), a single SCF cycle takes almost 10 hours, so I was forced
> to quit the job. (I think a single SCF cycle takes too long compared
> with other commercial codes I have tested, although they use different
> approximations for the wave functions.)
>
>
>
> *(4) **Related problems of these results*
>
> From these results of the parallel runs, there must be some problem
> with k-point parallelization on my system, even though I tried to adapt
> the .machines file to my system (the modified file is attached below).
>
>
>
> Are there any other parameters (e.g., emax, RMT radii, etc.) that I
> should change to get correct results with a large supercell? (I could
> not find any comments about the dependence of the calculation
> parameters on the lattice parameter or the number of atoms.)
>
>
>
> As for the calculation time, I just wonder whether this could be due to
> some problem with the compilation options for my system (see below for
> the details).
>
>
>
> *(5) **Other questions (about RAM usage during the lapw* calculations)*
>
> I set the NMATMAX and NUME parameters as suggested by the Wien2k manual;
> I calculated these values for a memory size of 24 GB.
>
>
>
> # NMATMAX = sqrt( RAM/10 ), where RAM is the size of memory in bytes.
>
> # NUME = NMATMAX/10
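>
Plugging 24 GB into those two rules of thumb gives the following (a
quick arithmetic check using the formulas exactly as quoted above):

```python
import math

ram_bytes = 24e9  # 24 GB of RAM, in bytes

nmatmax = int(math.sqrt(ram_bytes / 10))  # NMATMAX = sqrt(RAM/10)
nume = nmatmax // 10                      # NUME = NMATMAX/10

print(nmatmax, nume)  # roughly 48989 and 4898
```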
>
>
>
> However, when I type ‘top -c’ in the terminal, the RAM usage of each
> single program (e.g., lapw1, lapw2, ...) is only about 2~3%. I think
> this is very low, and thus I wonder whether it is functioning properly.
> However, when I use the ‘free’ command, memory usage is almost full.
> Please let me know how this program controls the memory usage of each
> single process.
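>
As a side note, 2~3% per process is not necessarily small once the
parallel processes are added up (a rough estimate, assuming ‘top’
reports the percentage of the full 24 GB and one process per .machines
line). Also, on Linux ‘free’ counts page cache as used memory, so
"almost full" output may largely be reclaimable cache.

```python
ram_gb = 24.0
percent_per_process = 2.5  # what 'top' shows for one lapw1 (assumed)
nprocs = 8                 # one k-parallel process per core (assumed)

# memory actually held by one process, and by all of them together
gb_per_process = ram_gb * percent_per_process / 100
total_gb = gb_per_process * nprocs
print(gb_per_process, total_gb)  # 0.6 GB each, 4.8 GB total
```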
>
>
>
>
>
> ------------------------------------------------------------------------
>
> *(a) **System specification*
>
> CPU: dual CPU (Intel Xeon quad-core X5550)
>
> RAM: 24 GB
>
> OS: RHEL 5.3
>
> Compiler: ifort 11
>
> MKL: Intel 10.4
>
>
>
> *(b) **Modified .machines file*
>
> 1:localhost (8 such lines in total, because there are 8 threads)
>
> …
>
> 1:localhost
>
> granularity:1
>
> xxtrafine:1
>
>
>
> Is there anything else that I have to put into this file? The
> parallelization problems may come from mistakes in this file. I would
> also like to know whether it is possible to use parallelization on my
> system (SMP) without installing an MPI library.
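>
For reference, a plain k-point-parallel .machines file for an 8-core
SMP machine would look something like the sketch below (no MPI needed;
note that, as far as I know, the UG spells the last keyword `extrafine`,
so `xxtrafine` above may simply be ignored):

```
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
1:localhost
granularity:1
extrafine:1
```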
>
>
>
> *(c) **The compilation options (copied from OPTIONS file)*
>
> current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -align -DINTEL_VML -O3
> -xSSE4.1
>
> current:FPOPT:$(FOPT)
>
> current:LDFLAGS:$(FOPT) -L/opt/intel/mkl/10.2.1.017/lib/em64t -pthread
>
> current:DPARALLEL:'-DParallel'
>
> current:R_LIBS:-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_lapack
> -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5
>
> current:RP_LIBS:-lmkl_intel_lp64 -lmkl_scalapack_lp64 -lmkl_blacs_lp64
> -lmkl_sequential -L /opt/local/fftw/lib/ -lfftw_mpi -lfftw
>
>
>
> Even though the program works successfully without k-point
> parallelization, there could be some mistakes in these options.
>
> ------------------------------------------------------------------------
>
>
>
>
>
> Thank you very much in advance for your help, and I am sorry for the
> confusion and possible misunderstandings in my previous mail.
>
>
>
> Sincerely, Yoo Soo
>
>
>
>
>
> *School of Earth and Environmental Science, Seoul National University,
> Korea*
>
> *Yi yoo soo*
>
> *e-mail: yys2064 at snu.ac.kr OR yiyoosoo at gmail.com*
>
> *office: +82-2-877-3072*
>
> *mobile: +82-10-2655-2064*
>
>
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Wien mailing list
> Wien at zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
--
-----------------------------------------
Peter Blaha
Inst. Materials Chemistry, TU Vienna
Getreidemarkt 9, A-1060 Vienna, Austria
Tel: +43-1-5880115671
Fax: +43-1-5880115698
email: pblaha at theochem.tuwien.ac.at
-----------------------------------------