Thank you all for your inputs.

I am running a test on a system of 21 atoms, spin-polarized, with 2 k-points and without inversion symmetry. Of course this is only a small test system, so matrix size should not be a problem. The .machines file was given in my previous email.

Good news: the problem has been solved. By using

lapw2_vector_split:$NCUS_per_MPI_JOB

I am able to finish the benchmark test with 1, 2, 4, 8, and 16 CPUs (on the same nodes), either fully MPI or hybrid k-point parallel + MPI.

I am really not sure whether the way I did this is correct (lapw2_vector_split:$NCUS_per_MPI_JOB).
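
To make my question concrete: in the 16-CPU hybrid case (2 k-points, so 2 parallel jobs of 8 MPI processes each) the scheme looks roughly like the lines below. The host names and the 8-processes-per-job split are only meant as an illustration of the idea, not a copy of my actual file:

1:compute-0-2:8
1:compute-0-3:8
granularity:1
extrafine:1
lapw2_vector_split:8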

Could anyone explain this setting to me? I am pretty new to Wien2k.
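
In particular, if I have understood the earlier suggestion of lapw2_vector_split:2 correctly, then with an even number of processors per MPI job (say the 4 processes in my earlier .machines) it would just mean something like the following, where 2 divides the number of processes in the job. Is that the right way to read it?

1:compute-0-2:4
granularity:1
extrafine:1
lapw2_vector_split:2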

Thank you.

On Wed, Sep 30, 2009 at 3:12 AM, Peter Blaha <pblaha@theochem.tuwien.ac.at> wrote:
> Very unusual, I cannot believe that 3 or 7 nodes run efficiently (lapw1) or
> are necessary.
> Maybe memory is an issue and you should try to set
>
> lapw2_vector_split:2
>
> (with an even number of processors!)
>
>> I can run MPI with lapw0, lapw1, and lapw2. However, lapw2 runs without
>> problems only with certain numbers of PROCESSORS PER MPI JOB (in both cases:
>> fully MPI and/or hybrid k-parallel + MPI). Those certain numbers are 3 and 7.
>> If I try to run with other numbers of PROCESSORS PER MPI JOB, it gives me a
>> message like the one below. This problem does not occur with lapw0 and lapw1.
>> Any suggestions for fixing this problem would be appreciated.
>>
>> [compute-0-2.local:08162] *** An error occurred in MPI_Comm_split
>> [compute-0-2.local:08162] *** on communicator MPI_COMM_WORLD
>> [compute-0-2.local:08162] *** MPI_ERR_ARG: invalid argument of some other kind
>> [compute-0-2.local:08162] *** MPI_ERRORS_ARE_FATAL (goodbye)
>> forrtl: error (78): process killed (SIGTERM)
>> Image PC Routine Line Source libpthread.so.0 000000383440DE80 Unknown Unknown Unknown
>> ........... etc....
>>
>> Reference:
>> OPTIONS file:
>> current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -align -DINTEL_VML -traceback
>> current:FPOPT:$(FOPT)
>> current:LDFLAGS:$(FOPT) -L/share/apps/fftw-3.2.1/lib/ -lfftw3 -L/share/apps/intel/mkl/10.0.011/lib/em64t -i-static -openmp
>> current:DPARALLEL:'-DParallel'
>> current:R_LIBS:-lmkl_lapack -lmkl_core -lmkl_em64t -lguide -lpthread
>> current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64_sequential -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_openmpi_lp64 -Wl,--end-group -lpthread -lmkl_em64t -L/share/apps/intel/fce/10.1.008/lib -limf
>> current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_
>>
>> OpenMPI 1.2.6
>> Intel compiler 10
>>
>> .machines
>> lapw0:compute-0-2:4
>> 1:compute-0-2:4
>> granularity:1
>> extrafine:1
>> lapw2_vector_split:1
>>
>> --------------------------------------------------
>> Duy Le
>> PhD Student
>> Department of Physics
>> University of Central Florida.
>
> --
> P.Blaha
> --------------------------------------------------------------------------
> Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
> Phone: +43-1-58801-15671   FAX: +43-1-58801-15698
> Email: blaha@theochem.tuwien.ac.at   WWW: http://info.tuwien.ac.at/theochem/
> --------------------------------------------------------------------------

--
--------------------------------------------------
Duy Le
PhD Student
Department of Physics
University of Central Florida.