[Wien] MPI run problem

alonofrio at comcast.net alonofrio at comcast.net
Sat May 11 22:31:28 CEST 2013


Thanks, Professor Marks.
I corrected my compiler options; I had a mistake with the OpenMPI BLACS library. However, I still get errors when trying to run: it always stops when lapw1 starts.
Now it is giving me this error:

w2k_dispatch_signal(): received: Segmentation fault 
-------------------------------------------------------------------------- 
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD 
with errorcode 0. 


NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. 
You may or may not see output from other processes, depending on 
exactly when Open MPI kills them. 
-------------------------------------------------------------------------- 
When I look at the case.dayfile, I see this:



hansen-b004 hansen-b004 hansen-b004 hansen-b004(5) Child id 0 SIGSEGV, contact developers 
Child id 1 SIGSEGV, contact developers 
Child id 2 SIGSEGV, contact developers 
Child id 3 SIGSEGV, contact developers 
0.085u 0.365s 0:01.49 29.5% 0+0k 0+0io 57pf+0w 
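
In case it is useful for diagnosis, here is a quick way to confirm what the binary actually loads at run time (a sketch; $WIENROOT and the process count stand in for the real setup):

# Confirm lapw1_mpi resolves the intended Open MPI; the MKL BLACS here is
# linked statically, so the thing to verify is which libmpi.so gets picked up
ldd $WIENROOT/lapw1_mpi | grep -i mpi

# Verify that a plain MPI launch works on the node at all
mpirun -np 4 hostname

If the trivial launch works but lapw1_mpi still dies during MPI startup, the usual suspect is a BLACS built against a different MPI.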


Thanks for your help. Any comments are greatly appreciated.


Alex Onofrio 
Departamento de Fisica 
Universidad de Los Andes 
Bogota, Colombia 





On May 11, 2013, at 1:43 PM, Laurence Marks <L-marks at northwestern.edu> wrote: 



You need to use the OpenMPI BLACS. Please check the Intel compilation assistance web page (previously posted, so check the list).
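
For example, the difference is only the BLACS entry in the link line. A minimal sketch (the FFTW path is a placeholder, and the rest must match your own MKL installation):

current:RP_LIBS:-lmkl_scalapack_lp64 $(MKLROOT)/lib/em64t/libmkl_blacs_openmpi_lp64.a -L/path/to/fftw3-openmpi/lib -lfftw3_mpi -lfftw3 $(R_LIBS)

The key point is libmkl_blacs_openmpi_lp64 in place of libmkl_blacs_intelmpi_lp64: a BLACS built for Intel MPI will typically segfault at startup when launched with Open MPI's mpirun.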
On May 11, 2013, at 12:10 PM, "alonofrio at comcast.net" <alonofrio at comcast.net> wrote: 


Wien2k Users, 


I am trying to get the MPI capabilities of Wien running, but I have run into some complications. 
The whole compilation process finishes with no errors, but when I try to run the code through run_lapw it stops at the beginning of the lapw1 program with the following error: 


w2k_dispatch_signal(): received: Segmentation fault 

MPI_ABORT was invoked on rank 7 in communicator MPI_COMM_WORLD 
with errorcode 28607820. 


NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. 
You may or may not see output from other processes, depending on 
exactly when Open MPI kills them. 
-------------------------------------------------------------------------- 
-------------------------------------------------------------------------- 
mpirun has exited due to process rank 7 with PID 60685 on 
node carter-a355.rcac.purdue.edu exiting without calling "finalize". This may 
have caused other processes in the application to be 
terminated by signals sent by mpirun (as reported here). 


This message is repeated as many times as there are processors submitted as MPI jobs. 


Here are my compilation options as shown in the OPTIONS file: 



current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback 
current:FPOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback 
current:LDFLAGS:$(FOPT) -L$(MKLROOT)/lib/em64t -pthread 
current:DPARALLEL:'-DParallel' 
current:R_LIBS:-lmkl_blas95_lp64 -lmkl_lapack95_lp64 $(MKLROOT)/lib/em64t/libmkl_scalapack_lp64.a -Wl,--start-group $(MKLROOT)/lib/em64t/libmkl_cdft_core.a $(MKLROOT)/lib/em64t/libmkl_intel_lp64.a $(MKLROOT)/lib/em64t/libmkl_intel_thread.a $(MKLROOT)/lib/em64t/libmkl_core.a $(MKLROOT)/lib/em64t/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -openmp -lpthread
current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64 -lmkl_blacs_lp64 -L/apps/rhel6/fftw-3.3.1/openmpi-1.4.4_intel-12.0.084/lib -lfftw3_mpi -lfftw3 $(R_LIBS)
current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_ 


and these are the options in the parallel_options file: 

setenv USE_REMOTE 1 
setenv MPI_REMOTE 0 
setenv WIEN_GRANULARITY 1 
setenv WIEN_MPIRUN "mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_ -hostfile _HOSTS_ _EXEC_"
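
As a sanity check of the launcher itself, the same command shape can be tried with a harmless executable in place of _EXEC_ (a sketch; $PBS_NODEFILE is only an assumption for a PBS-style hostfile, so substitute whatever the queueing system provides):

mpirun -x LD_LIBRARY_PATH -x PATH -np 4 -hostfile $PBS_NODEFILE hostname

If this prints the node names but lapw1_mpi segfaults, the problem is in the binary and its libraries rather than in the mpirun invocation.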


I compiled the code with intel 12.0.084, openmpi 1.4.4 (compiled with intel 12.0.084), and fftw 3.3.1 (compiled with intel 12.0.084 and openmpi 1.4.4). 
I am trying to run the code on the university cluster, which has InfiniBand and Intel Xeon E5 processors. 
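
One more consistency check, in case several MPI stacks are installed on the cluster: confirm that the mpirun found at run time belongs to the same openmpi 1.4.4 used for compilation. A sketch using standard Open MPI wrapper queries (module names differ per site):

# Which launcher and compiler wrapper are on the PATH at run time?
which mpirun mpif90
mpirun --version

# What the Fortran wrapper actually compiles and links against
mpif90 --showme

A mismatch here, such as a system-default MPI shadowing the intended module, can also produce startup segfaults.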


I hope this information is enough for someone to point me to the problem. 


Thanks so much for your time 


Alex Onofrio 
Departamento de Fisica 
Universidad de Los Andes 
Bogota, Colombia 
_______________________________________________ 
Wien mailing list 
Wien at zeus.theochem.tuwien.ac.at 
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien 
SEARCH the MAILING-LIST at: http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html 
