Thanks, Professor Marks.

I corrected my compiler options; the mistake was in the BLACS library I was linking against. However, I still get errors when trying to run: the calculation always stops as soon as lapw1 starts.
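Concretely, the change was to link the Open MPI BLACS instead of the Intel MPI one. If I followed your suggestion correctly, the BLACS-related pieces of the link options should read roughly like this (library names are for the em64t MKL layout quoted below and may differ for other MKL versions, so please take this only as a sketch of what I believe the corrected line should be):

current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64 -lmkl_blacs_openmpi_lp64 -L/apps/rhel6/fftw-3.3.1/openmpi-1.4.4_intel-12.0.084/lib -lfftw3_mpi -lfftw3 $(R_LIBS)

with the static $(MKLROOT)/lib/em64t/libmkl_blacs_intelmpi_lp64.a in R_LIBS replaced by $(MKLROOT)/lib/em64t/libmkl_blacs_openmpi_lp64.a.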
It now gives me this error:

w2k_dispatch_signal(): received: Segmentation fault
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode 0.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

and when I look at the case.dayfile I see this:

hansen-b004 hansen-b004 hansen-b004 hansen-b004(5) Child id 0 SIGSEGV, contact developers
 Child id 1 SIGSEGV, contact developers
 Child id 2 SIGSEGV, contact developers
 Child id 3 SIGSEGV, contact developers
0.085u 0.365s 0:01.49 29.5% 0+0k 0+0io 57pf+0w

Thanks for your help; any comments are much appreciated.

Alex Onofrio
Departamento de Fisica
Universidad de Los Andes
Bogota, Colombia

On May 11, 2013, at 1:43 PM, Laurence Marks <L-marks@northwestern.edu> wrote:

> You need to use the Open MPI BLACS. Please check the Intel compilation assistance web page (previously posted, so check the list).
>
> On May 11, 2013 12:10 PM, "alonofrio@comcast.net" <alonofrio@comcast.net> wrote:
>
>> Wien2k users,
>>
>> I am trying to get the MPI capabilities of WIEN2k running, but I have run into some complications.
>> The whole compilation process finishes with no errors, but when I try to run the code through run_lapw it stops at the beginning of the lapw1 program with the following error:
>>
>> w2k_dispatch_signal(): received: Segmentation fault
>> MPI_ABORT was invoked on rank 7 in communicator MPI_COMM_WORLD
>> with errorcode 28607820.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>> --------------------------------------------------------------------------
>> mpirun has exited due to process rank 7 with PID 60685 on
>> node carter-a355.rcac.purdue.edu exiting without calling "finalize". This may
>> have caused other processes in the application to be
>> terminated by signals sent by mpirun (as reported here).
>>
>> This message is repeated once for each process submitted as an MPI job.
>>
>> Here are my compilation options as shown in the OPTIONS file:
>>
>> current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
>> current:FPOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
>> current:LDFLAGS:$(FOPT) -L$(MKLROOT)/lib/em64t -pthread
>> current:DPARALLEL:'-DParallel'
>> current:R_LIBS:-lmkl_blas95_lp64 -lmkl_lapack95_lp64 $(MKLROOT)/lib/em64t/libmkl_scalapack_lp64.a -Wl,--start-group $(MKLROOT)/lib/em64t/libmkl_cdft_core.a $(MKLROOT)/lib/em64t/libmkl_intel_lp64.a $(MKLROOT)/lib/em64t/libmkl_intel_thread.a $(MKLROOT)/lib/em64t/libmkl_core.a $(MKLROOT)/lib/em64t/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -openmp -lpthread
>> current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64 -lmkl_blacs_lp64 -L/apps/rhel6/fftw-3.3.1/openmpi-1.4.4_intel-12.0.084/lib -lfftw3_mpi -lfftw3 $(R_LIBS)
>> current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_
>>
>> and these are the options in the parallel_options file:
>>
>> setenv USE_REMOTE 1
>> setenv MPI_REMOTE 0
>> setenv WIEN_GRANULARITY 1
>> setenv WIEN_MPIRUN "mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_ -hostfile _HOSTS_ _EXEC_"
>>
>> I compiled the code with Intel 12.0.084, Open MPI 1.4.4 (built with Intel 12.0.084), and FFTW 3.3.1 (built with Intel 12.0.084 and Open MPI 1.4.4).
>> I am trying to run the code on the university cluster, which has InfiniBand and Intel Xeon E5 nodes.
>>
>> I hope this information is enough for one of you to point me to the problem.
>>
>> Thanks so much for your time.
>>
>> Alex Onofrio
>> Departamento de Fisica
>> Universidad de Los Andes
>> Bogota, Colombia
>
> _______________________________________________
> Wien mailing list
> Wien@zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at: http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html