Dear Wien2k Users and Prof. Marks,

Thank you very much for your reply. Here is some more information.

WIEN2k version: WIEN2k_11.1, running on an 8-processor server (two cores per processor)
MKL library: 10.0.1.014
OpenMPI: 1.3
FFTW: 2.1.5

My OPTIONS file is as follows:
<p class="MsoPlainText"><span style="font-family:"Courier New"">current:FOPT:-FR
-O3 -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
-l/opt/openmpi/include<br>
current:FPOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -traceback<br>
current:LDFLAGS:-L/root/WIEN2k_11/SRC_lib
-L/opt/intel/cmkl/<a href="http://10.0.1.014/lib/em64t">10.0.1.014/lib/em64t</a> -lmkl_em64t -lmkl_blacs_openmpi_lp64
-lmkl_solver -lguide -lpthread -i-static<br>
current:DPARALLEL:'-DParallel'<br>
current:R_LIBS:-L/opt/intel/cmkl/<a href="http://10.0.1.014/lib/em64t">10.0.1.014/lib/em64t</a> -lmkl_scalapack_lp64
-lmkl_solver_lp64_sequential -Wl,--start-group -lmkl_intel_lp64
-lmkl_sequential -lmkl_core -lmkl_blacs_openmpi_lp64 -Wl,--end-group -lpthread
-lm -L/opt/openmpi/1.3/lib/ -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal
-ldl -Wl,--export-dynamic -lnsl -lutil -limf -L/opt/fftw-2.1.5/lib/lib/
-lfftw_mpi -lrfftw_mpi -lfftw -lrfftw<br>
current:RP_LIBS:-L/opt/intel/cmkl/<a href="http://10.0.1.014/lib/em64t">10.0.1.014/lib/em64t</a> -lmkl_scalapack_lp64
-lmkl_solver_lp64_sequential -Wl,--start-group -lmkl_intel_lp64
-lmkl_sequential -lmkl_core -lmkl_blacs_openmpi_lp64 -Wl,--end-group -lpthread
-lm -L/opt/openmpi/1.3/lib/ -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal
-ldl -Wl,--export-dynamic -lnsl -lutil -limf -L/opt/fftw-2.1.5/lib/lib/
-lfftw_mpi -lrfftw_mpi -lfftw -lrfftw<br>
current:MPIRUN:/opt/openmpi/1.3/bin/mpirun -v -n _NP_ _EXEC_<br>
</span></p>
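
As a side note, the directories referenced in LDFLAGS / R_LIBS / RP_LIBS can be checked directly from the shell; the lines below are only a quick sanity check with the paths copied from the OPTIONS file above (nothing WIEN2k-specific is assumed):

# confirm the library directories used at link time are present
ls -d /opt/intel/cmkl/10.0.1.014/lib/em64t
ls -d /opt/fftw-2.1.5/lib/lib
ls -d /opt/openmpi/1.3/lib
# confirm which mpirun and mpif90 are first in PATH
which mpirun mpif90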

My parallel_options file is as follows:
<p class="MsoPlainText"><span style="font-family:"Courier New"">setenv USE_REMOTE
0<br>
setenv MPI_REMOTE 0<br>
setenv WIEN_GRANULARITY 1<br>
setenv WIEN_MPIRUN "/opt/openmpi/1.3/bin/mpirun -v -n _NP_ -machinefile
_HOSTS_ _EXEC_"<br>
</span></p>
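
If I understand the WIEN2k parallel scripts correctly, they substitute _NP_, _HOSTS_ and _EXEC_ into this template before calling mpirun, so the command that is finally executed should look roughly like the line below. The process count, machine-file name and executable are only illustrative, and I am assuming $WIENROOT is /root/WIEN2k_11 as the LDFLAGS above suggest:

# schematic expansion of WIEN_MPIRUN (illustrative values only)
/opt/openmpi/1.3/bin/mpirun -v -n 4 -machinefile .machine1 /root/WIEN2k_11/lapw1_mpi lapw1_1.def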

Compilation produced no error messages and all executables were generated. After editing the parallel_options file the error message has changed; it now reads as follows:
<p class="MsoPlainText"><span style="font-family:"Courier New"">[arya:01254]
filem:rsh: copy(): Error: File type unknown<br>
ssh: cpu1: Name or service not known</span></p>
<p class="MsoPlainText"><span style="font-family:"Courier New"">--------------------------------------------------------------------------<br>
A daemon (pid 9385) died unexpectedly with status 255 while attempting<br>
to launch so we are aborting.<br>
<br>
There may be more information reported by the environment (see above).<br>
<br>
This may be because the daemon was unable to find all the needed shared<br>
libraries on the remote node. You may set your LD_LIBRARY_PATH to have
the<br>
location of the shared libraries on the remote nodes and this will<br>
automatically be forwarded to the remote nodes.<br>
--------------------------------------------------------------------------<br>
--------------------------------------------------------------------------<br>
mpirun noticed that the job aborted, but has no info as to the process<br>
that caused that situation.<br>
--------------------------------------------------------------------------<br>
ssh: cpu2: Name or service not known</span></p>
<p class="MsoPlainText"><span style="font-family:"Courier New"">ssh: cpu3: Name
or service not known</span></p>
<p class="MsoPlainText"><span style="font-family:"Courier New"">ssh: cpu4: Name
or service not known</span></p>
<p class="MsoPlainText"><span style="font-family:"Courier New"">mpirun: clean
termination accomplished<br>
<br>
LAPW1 - Error<br>
LAPW1 - Error<br>
LAPW1 - Error<br>
LAPW1 - Error<br>
LAPW1 - Error<br>
LAPW1 - Error<br>
LAPW1 - Error<br>
</span></p>
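
Since ssh reports "Name or service not known" for cpu1-cpu4, the first thing I will check is whether the head node can resolve and reach those names at all. A minimal test in csh/tcsh, assuming the node names cpu1-cpu16 that I put in .machines:

# can the head node resolve and log in to the names used in .machines?
foreach h (cpu1 cpu2 cpu3 cpu4)
    getent hosts $h        # should print an IP address for each name
    ssh $h hostname        # should work without a password and print the node name
end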

I have used the following .machines file for 16 k-points:

granularity:1
1:cpu1
1:cpu2
1:cpu3
1:cpu4
1:cpu5
1:cpu6
1:cpu7
1:cpu8
1:cpu9
1:cpu10
1:cpu11
1:cpu12
1:cpu13
1:cpu14
1:cpu15
1:cpu16
extrafine:1
lapw0: cpu1:1 cpu2:1 cpu3:1 cpu4:1
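
For comparison, if this is in fact a single shared-memory machine (the log above reports the host as arya), then my understanding is that .machines should list the real host name rather than cpu1-cpu16. A hypothetical layout for that case would look like the sketch below (only four of the sixteen "1:arya" lines are shown):

granularity:1
1:arya
1:arya
1:arya
1:arya
extrafine:1
lapw0: arya:4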

Could anyone please suggest a solution to this problem?

With kind regards,

On Mon, Jul 23, 2012 at 4:50 PM, Laurence Marks <L-marks@northwestern.edu> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>You probably have an incorrect MPIRUN environmental parameter. You have not provided enough information, and need to do a bit more analysis yourself.</p>
<p>---------------------------<br>
Professor Laurence Marks<br>
Department of Materials Science and Engineering<br>
Northwestern University<br>
<a href="http://www.numis.northwestern.edu" target="_blank">www.numis.northwestern.edu</a> 1-847-491-3996<br>
"Research is to see what everybody else has seen, and to think what nobody else has thought"<br>
Albert Szent-Gyorgi<br>
</p><div class="HOEnZb"><div class="h5">
<div class="gmail_quote">On Jul 23, 2012 6:17 AM, "alpa dashora" <<a href="mailto:dashoralpa@gmail.com" target="_blank">dashoralpa@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
Dear Wien2k Users,<br>
<br>
I recently installed Wien2k with openmpi on 16 processor server. Installation was completed without any compilation error. While running the run_lapw -p command, I received the following error:<br>
------------------------------------------------------------------------------------------------------------------------------<br>
<br>
mpirun was unable to launch the specified application as it could not find an executable:<br>
<br>
Executable:-4<br>
Node: arya<br>
<br>
while attempting to start process rank 0.<br>
-------------------------------------------------------------------------------------------------------------------------------<br>
<br>
Kindly suggest me the solution.<br>
mpirun is available in /opt/openmpi/1.3/bin<br>
<br>
Thank you in advance.<br>
<br>
Regards,<br>
<br>
-- <br>
Dr. Alpa Dashora<br>
</div>
</blockquote></div>
>
> _______________________________________________
> Wien mailing list
> Wien@zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien

--
Alpa Dashora