<p>You need to use the OpenMPI BLACS: your R_LIBS links the Intel MPI BLACS (libmkl_blacs_intelmpi_lp64), which will not work with Open MPI. Please check the Intel compilation assistance webpage (previously posted, so check the mailing-list archive).</p>
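<p>As a sketch only (keeping the rest of your link line unchanged and assuming the MKL em64t layout shown in your OPTIONS file; library names may differ for other MKL versions), the BLACS entries would change along these lines:</p>
<p>current:R_LIBS:-lmkl_blas95_lp64 -lmkl_lapack95_lp64 $(MKLROOT)/lib/em64t/libmkl_scalapack_lp64.a -Wl,--start-group $(MKLROOT)/lib/em64t/libmkl_cdft_core.a $(MKLROOT)/lib/em64t/libmkl_intel_lp64.a $(MKLROOT)/lib/em64t/libmkl_intel_thread.a $(MKLROOT)/lib/em64t/libmkl_core.a $(MKLROOT)/lib/em64t/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -openmp -lpthread<br>
current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64 -lmkl_blacs_openmpi_lp64 -L/apps/rhel6/fftw-3.3.1/openmpi-1.4.4_intel-12.0.084/lib -lfftw3_mpi -lfftw3 $(R_LIBS)</p>
<p>After changing OPTIONS, the MPI binaries (lapw0_mpi, lapw1_mpi, lapw2_mpi) have to be relinked/recompiled for the change to take effect.</p>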
<div class="gmail_quote">On May 11, 2013 12:10 PM, "<a href="mailto:alonofrio@comcast.net">alonofrio@comcast.net</a>" <<a href="mailto:alonofrio@comcast.net">alonofrio@comcast.net</a>> wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div style="font-size:12pt;font-family:Arial">Wien2k User
<div><br>
</div>
<div>I am trying to get the MPI capabilities of WIEN2k running, but I have run into a complication. </div>
<div>The whole compilation process finishes with no errors, but when I try to run the code through run_lapw, it stops at the beginning of the lapw1 program with the following error:</div>
<div><br>
</div>
<div>w2k_dispatch_signal(): received: Segmentation fault</div>
<div>
<div>MPI_ABORT was invoked on rank 7 in communicator MPI_COMM_WORLD</div>
<div>with errorcode 28607820.</div>
<div><br>
</div>
<div>NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.</div>
<div>You may or may not see output from other processes, depending on</div>
<div>exactly when Open MPI kills them.</div>
<div>--------------------------------------------------------------------------</div>
<div>--------------------------------------------------------------------------</div>
<div>mpirun has exited due to process rank 7 with PID 60685 on</div>
<div>node carter-a355.rcac.purdue.edu exiting without calling "finalize". This may</div>
<div>have caused other processes in the application to be</div>
<div>terminated by signals sent by mpirun (as reported here).</div>
</div>
<div><br>
</div>
<div>This message is repeated once for each MPI process submitted.</div>
<div><br>
</div>
<div>Here are my compilation options as shown in the OPTIONS file:</div>
<div><br>
</div>
<div>
<div>current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback</div>
<div>current:FPOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback</div>
<div>current:LDFLAGS:$(FOPT) -L$(MKLROOT)/lib/em64t -pthread</div>
<div>current:DPARALLEL:'-DParallel'</div>
<div>current:R_LIBS:-lmkl_blas95_lp64 -lmkl_lapack95_lp64 $(MKLROOT)/lib/em64t/libmkl_scalapack_lp64.a -Wl,--start-group $(MKLROOT)/lib/em64t/libmkl_cdft_core.a $(MKLROOT)/lib/em64t/libmkl_intel_lp64.a $(MKLROOT)/lib/em64t/libmkl_intel_thread.a $(MKLROOT)/lib/em64t/libmkl_core.a $(MKLROOT)/lib/em64t/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -openmp -lpthread</div>
<div>current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64 -lmkl_blacs_lp64 -L/apps/rhel6/fftw-3.3.1/openmpi-1.4.4_intel-12.0.084/lib -lfftw3_mpi -lfftw3 $(R_LIBS)</div>
<div>current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_</div>
</div>
<div><br>
</div>
<div>and these are the options in the parallel_options file:</div>
<div>
<div>setenv USE_REMOTE 1</div>
<div>setenv MPI_REMOTE 0</div>
<div>setenv WIEN_GRANULARITY 1</div>
<div>setenv WIEN_MPIRUN "mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_ -hostfile _HOSTS_ _EXEC_"</div>
</div>
<div><br>
</div>
<div>I compiled the code with Intel 12.0.084, OpenMPI 1.4.4 (compiled with Intel 12.0.084), and FFTW 3.3.1 (compiled with Intel 12.0.084 and OpenMPI 1.4.4).</div>
<div>I am trying to run the code on the university cluster, which has InfiniBand and Intel Xeon E5 processors.</div>
<div><br>
</div>
<div>I hope this information is enough for one of you to point me to the problem.</div>
<div><br>
</div>
<div>Thanks so much for your time.</div>
<div><br>
</div>
<div>Alex Onofrio</div>
<div>Departamento de Fisica</div>
<div>Universidad de Los Andes</div>
<div>Bogota, Colombia</div>
<div><br>
</div>
</div>
</div>
</blockquote></div>