[Wien] MPI run problem

Laurence Marks L-marks at northwestern.edu
Sat May 11 19:43:07 CEST 2013


You need to use the OpenMPI BLACS. Please check the Intel compilation
assistance web page (previously posted, so check the list archives).
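Concretely, with OpenMPI you need MKL's OpenMPI BLACS library
(libmkl_blacs_openmpi_lp64) instead of the Intel-MPI one
(libmkl_blacs_intelmpi_lp64 / -lmkl_blacs_lp64) that appears in the
R_LIBS and RP_LIBS lines quoted below; linking the Intel-MPI BLACS
against an OpenMPI mpirun is a classic cause of segfaults right at MPI
startup. An untested sketch of the change (the em64t library directory
is copied from your OPTIONS; adjust it to your MKL installation):

  # sketch only: swap the BLACS library, leave everything else unchanged
  current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64
  -lmkl_blacs_openmpi_lp64
  -L/apps/rhel6/fftw-3.3.1/openmpi-1.4.4_intel-12.0.084/lib
  -lfftw3_mpi -lfftw3 $(R_LIBS)

and in R_LIBS replace $(MKLROOT)/lib/em64t/libmkl_blacs_intelmpi_lp64.a
with $(MKLROOT)/lib/em64t/libmkl_blacs_openmpi_lp64.a.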
On May 11, 2013 12:10 PM, "alonofrio at comcast.net" <alonofrio at comcast.net>
wrote:

>  Wien2k users,
>
>  I am trying to get the MPI capabilities of WIEN2k running, but I have
> run into some complications.
> The whole compilation goes through with no errors, but when I run the
> code through run_lapw it stops at the beginning of lapw1 with the
> following error:
>
>  w2k_dispatch_signal(): received: Segmentation fault
>  MPI_ABORT was invoked on rank 7 in communicator MPI_COMM_WORLD
> with errorcode 28607820.
>
>  NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 7 with PID 60685 on
> node carter-a355.rcac.purdue.edu exiting without calling "finalize". This
> may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
>
>  This error is repeated once for each of the processes submitted as
> MPI jobs.
>
>  Here are my compilation options as shown in the OPTIONS file:
>
>  current:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
> current:FPOPT:-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
> current:LDFLAGS:$(FOPT) -L$(MKLROOT)/lib/em64t -pthread
> current:DPARALLEL:'-DParallel'
> current:R_LIBS:-lmkl_blas95_lp64 -lmkl_lapack95_lp64
> $(MKLROOT)/lib/em64t/libmkl_scalapack_lp64.a -Wl,--start-group
> $(MKLROOT)/lib/em64t/libmkl_cdft_core.a
> $(MKLROOT)/lib/em64t/libmkl_intel_lp64.a
> $(MKLROOT)/lib/em64t/libmkl_intel_thread.a
> $(MKLROOT)/lib/em64t/libmkl_core.a
> $(MKLROOT)/lib/em64t/libmkl_blacs_intelmpi_lp64.a
> -Wl,--end-group -openmp -lpthread
> current:RP_LIBS:-lmkl_scalapack_lp64 -lmkl_solver_lp64 -lmkl_blacs_lp64
> -L/apps/rhel6/fftw-3.3.1/openmpi-1.4.4_intel-12.0.084/lib
> -lfftw3_mpi -lfftw3 $(R_LIBS)
> current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_
>
>  and these are the options in the parallel_options file:
>  setenv USE_REMOTE 1
> setenv MPI_REMOTE 0
> setenv WIEN_GRANULARITY 1
> setenv WIEN_MPIRUN "mpirun -x LD_LIBRARY_PATH -x PATH -np _NP_
> -hostfile _HOSTS_ _EXEC_"
>
>  I compiled the code with Intel 12.0.084, OpenMPI 1.4.4 (compiled with
> Intel 12.0.084), and FFTW 3.3.1 (compiled with Intel 12.0.084 and
> OpenMPI 1.4.4).
> I am trying to run the code on the university cluster, which has
> InfiniBand and Intel Xeon E5 processors.
>
>  I hope this information is enough for any of you to point me to the
> problem.
>
>  Thanks so much for your time
>
>  Alex Onofrio
> Departamento de Fisica
> Universidad de Los Andes
> Bogota, Colombia