Hello again,

I commented out the line "call W2kinit", and now I get a more descriptive message, but I am still lost. I am not sure whether it is failing to find some libraries, or whether the environment variables are not being propagated to all nodes.

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine   Line     Source
libmpi.so.1        00002B746414FF7A  Unknown   Unknown  Unknown
lapw1c_mpi         00000000004E9192  Unknown   Unknown  Unknown
libmkl_scalapack_  00002B746330B231  Unknown   Unknown  Unknown

Any ideas? I'm sorry for all the questions.

David Guzman
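Those two suspects can be separated with two quick checks. The following is only a sketch: the node name hansen-b004 is taken from the dayfile quoted below (substitute one of your own compute nodes), and it assumes $WIENROOT is set in the shell startup files on the nodes:

# any "not found" line means a library is missing from the remote LD_LIBRARY_PATH
ssh hansen-b004 'ldd $WIENROOT/lapw1c_mpi | grep "not found"'
# check that mpirun really exports the environment with the -x flags from parallel_options
mpirun -x LD_LIBRARY_PATH -x PATH -np 1 -host hansen-b004 printenv LD_LIBRARY_PATH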
On May 11, 2013, at 4:46 PM, Laurence Marks <L-marks@northwestern.edu> wrote:

> The addition of the signal trapping in Wien2k (W2kinit in lapw[0-2].F and others) has a plus and a minus. The pluses are that the weekly emails on the list about ulimit-associated crashes are avoided, and also (perhaps not so obvious) that mpi tasks die more gracefully. Unfortunately, it can also make it less than clear what is wrong with an mpi job.
>
> I suggest (and others should do the same as needed) that you comment out the "call W2kinit" in lapw1, recompile just lapw1, then try again -- hopefully you will get a more human-understandable message. IMPORTANT: check hansen-b004 to ensure that you do not have any zombie processes still running; depending on what version of ssh you are using, you may have them hanging around.
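Spelled out, that suggestion might look like the sketch below; the file location SRC_lapw1/lapw1.F and the recompile route through siteconfig_lapw are the usual Wien2k layout, so adjust if your installation differs:

cd $WIENROOT/SRC_lapw1
grep -n "W2kinit" lapw1.F    # locate the call, then put a "!" in front of that line
# recompile lapw1 (and lapw1_mpi/lapw1c_mpi), e.g. via siteconfig_lapw -> "Compile/Recompile programs"
# before rerunning, check the node for leftover tasks from the crashed job:
ssh hansen-b004 'ps -fu $USER | grep -E "lapw1|mpirun" | grep -v grep'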
> On Sat, May 11, 2013 at 3:31 PM, alonofrio@comcast.net <alonofrio@comcast.net> wrote:
>
>> Thanks Professor Marks,
>> I corrected my compiler options; I had a mistake with the openmpi blacs library. However, I still get errors when trying to run. It always stops when it starts lapw1. Now it is giving me this error:
>>
>> w2k_dispatch_signal(): received: Segmentation fault
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
>> with errorcode 0.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>>
>> and when I look at the case.dayfile I see this:
>>
>> hansen-b004 hansen-b004 hansen-b004 hansen-b004(5) Child id 0 SIGSEGV, contact developers
>> Child id 1 SIGSEGV, contact developers
>> Child id 2 SIGSEGV, contact developers
>> Child id 3 SIGSEGV, contact developers
>> 0.085u 0.365s 0:01.49 29.5% 0+0k 0+0io 57pf+0w
>>
>> Thanks for your help. Any comments are well appreciated.
>>
>>> Alex Onofrio
>>> Departamento de Fisica
>>> Universidad de Los Andes
>>> Bogota, Colombia
>>
>> On May 11, 2013, at 1:43 PM, Laurence Marks <L-marks@northwestern.edu> wrote:
>>
>> You need to use the openmpi blacs. Please check the Intel compilation assistance webpage (previously posted, so check the list).
>>
>> On May 11, 2013 12:10 PM, "alonofrio@comcast.net" <alonofrio@comcast.net> wrote:
>>
>>> Wien2k users,
>>>
>>> I am trying to get the MPI capabilities of Wien running, but I have run into some complications. The whole compilation process goes fine with no errors, but when I try to run the code through run_lapw it stops at the beginning of the lapw1 program with the following error:
>>>
>>> w2k_dispatch_signal(): received: Segmentation fault
>>> MPI_ABORT was invoked on rank 7 in communicator MPI_COMM_WORLD
>>> with errorcode 28607820.
>>>
>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>> You may or may not see output from other processes, depending on
>>> exactly when Open MPI kills them.
>>> --------------------------------------------------------------------------
>>> --------------------------------------------------------------------------
>>> mpirun has exited due to process rank 7 with PID 60685 on
>>> node carter-a355.rcac.purdue.edu exiting without calling "finalize". This may
>>> have caused other processes in the application to be
>>> terminated by signals sent by mpirun (as reported here).
>>>
>>> This repeats the same number of times as the number of processors submitted as mpi jobs.
>>>
>>> Here are my compilation options as shown in the OPTIONS file:
>>>
>>> current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
>>> current:FPOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
>>> current:LDFLAGS:$(FOPT) -L$(MKLROOT)/lib/em64t -pthread
>>> current:DPARALLEL:'-DParallel'
>>> current:R_LIBS:-lmkl_blas95_lp64 -lmkl_lapack95_lp64 $(MKLROOT)/lib/em64t/libmkl_scalapack_lp64.a -Wl,--start-group $(MKLROOT)/lib/em64t/libmkl_cdft_core.a $(MKLROOT)/lib/em64t/libmkl_intel_lp64.a $(MKLROOT)/lib/em64t/libmkl_intel_thread.a $(MKLROOT)/lib/em64t/libmkl_core.a $(MKLROOT)/lib/em64t/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -openmp -lpthread
>>> current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64 -lmkl_blacs_lp64 -L/apps/rhel6/fftw-3.3.1/openmpi-1.4.4_intel-12.0.084/lib -lfftw3_mpi -lfftw3 $(R_LIBS)
>>> current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_
>>>
>>> and these are the options in the parallel_options file:
>>>
>>> setenv USE_REMOTE 1
>>> setenv MPI_REMOTE 0
>>> setenv WIEN_GRANULARITY 1
>>> setenv WIEN_MPIRUN "mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_ -hostfile _HOSTS_ _EXEC_"
>>>
>>> I compiled the code with Intel 12.0.084, openmpi 1.4.4 (compiled with Intel 12.0.084), and fftw 3.3.1 (compiled with Intel 12.0.084 and openmpi 1.4.4). I am trying to run the code on the university cluster, which has InfiniBand and Intel Xeon E5 processors.
>>>
>>> I hope this information is enough for any of you to point me to the problem.
>>>
>>> Thanks so much for your time.
>>>
>>> Alex Onofrio
>>> Departamento de Fisica
>>> Universidad de Los Andes
>>> Bogota, Colombia
>
> --
> Professor Laurence Marks
> Department of Materials Science and Engineering
> Northwestern University
> www.numis.northwestern.edu  1-847-491-3996
> "Research is to see what everybody else has seen, and to think what nobody else has thought"
> Albert Szent-Györgyi
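For anyone hitting the same wall: the "openmpi blacs" change recommended in the quoted 1:43 PM reply amounts to linking MKL's Open MPI BLACS layer instead of the Intel MPI (or default MPICH) one. As a sketch only, against the OPTIONS lines quoted above and assuming an MKL release that ships libmkl_blacs_openmpi_lp64 in the same em64t directory, the two library lines would become roughly:

current:R_LIBS:-lmkl_blas95_lp64 -lmkl_lapack95_lp64 $(MKLROOT)/lib/em64t/libmkl_scalapack_lp64.a -Wl,--start-group $(MKLROOT)/lib/em64t/libmkl_cdft_core.a $(MKLROOT)/lib/em64t/libmkl_intel_lp64.a $(MKLROOT)/lib/em64t/libmkl_intel_thread.a $(MKLROOT)/lib/em64t/libmkl_core.a $(MKLROOT)/lib/em64t/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -openmp -lpthread
current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64 -lmkl_blacs_openmpi_lp64 -L/apps/rhel6/fftw-3.3.1/openmpi-1.4.4_intel-12.0.084/lib -lfftw3_mpi -lfftw3 $(R_LIBS)

A changed OPTIONS file only takes effect once the MPI programs (lapw0, lapw1, lapw2, ...) are recompiled, for example again through siteconfig_lapw.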