Hi Arturo,

   Actually, Prof. Marks was right!!  :)
   All the best,
                    Luis



2013/2/8 Arturo <artginer@bifi.es>:

Hi Luis,

You were right. The problem was in the parallel options. I had to
change this library and also recompile the fftw3_mpi library with the
correct Open MPI compiler.

Regards!!
Arturo
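
For anyone hitting the same problem: a minimal sketch of rebuilding FFTW3 with
MPI support against Open MPI could look like the following (the version number,
prefix and compiler names are only assumptions, adjust them to your setup):

    # rebuild fftw3 / fftw3_mpi with the Open MPI wrappers (version/prefix are examples)
    tar xzf fftw-3.3.3.tar.gz
    cd fftw-3.3.3
    ./configure --prefix=$HOME/fftw3-openmpi --enable-mpi \
                CC=icc F77=ifort MPICC=mpicc
    make -j4 && make install
    # afterwards, relink lapw1_mpi against the libraries in $HOME/fftw3-openmpi/lib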

On 06/02/13 14:04, Luis Ogando wrote:

Hi Arturo,

   You have to check the compilation options for the parallel version
   (not the serial one) and replace  -lmkl_blacs_lp64  with
   -lmkl_blacs_openmpi_lp64  in the  RP_LIB(SCALAPACK+PBLAS):  line.
   Good luck,
               Luis

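For reference, the changed line (usually set through siteconfig_lapw) might look
roughly like this; the ScaLAPACK and FFTW flags around it are only a typical
example and will differ from system to system:

    # before (BLACS variant that does not match Open MPI):
    RP_LIB(SCALAPACK+PBLAS): -lmkl_scalapack_lp64 -lmkl_blacs_lp64 -lfftw3_mpi -lfftw3 ...
    # after (BLACS built against Open MPI):
    RP_LIB(SCALAPACK+PBLAS): -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -lfftw3_mpi -lfftw3 ...
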
        <div class="gmail_quote">2013/2/6 Arturo <span dir="ltr">&lt;<a href="mailto:artginer@bifi.es" target="_blank">artginer@bifi.es</a>&gt;</span><br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000">
Hi Luis,

First of all, thanks for the help!!

Regarding the first suggestion, how can I know if I'm using this
library? In the compilation options I don't use that library. I used:

    -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread

So the solution you suggest is to add the library mkl_blacs_openmpi_lp64
to this line?

Regards
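
One quick way to see which BLACS flavour the parallel binaries were actually
built with is to look at the parallel link options and, if the libraries are
dynamic, at the binary itself. As a sketch ($WIENROOT and the Makefile location
are assumptions about a standard install):

    # where does BLACS appear in the parallel build options?
    grep -i blacs $WIENROOT/SRC_lapw1/Makefile
    # only informative if MKL / Open MPI are linked dynamically:
    ldd $WIENROOT/lapw1_mpi | grep -i -e blacs -e mpi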

On 06/02/13 13:32, Luis Ogando wrote:

Hi Arturo,

    Try
    http://zeus.theochem.tuwien.ac.at/pipermail/wien/2012-July/017207.html

    The first suggestion given by Prof. Marks solved a similar problem
    for me.
    All the best,
                  Luis



2013/2/6 Arturo <artginer@bifi.es>:

Hi,

We are trying to run WIEN2k v12, compiled with Open MPI 1.5.4 and the Intel
compilers 12.1, but it gives us a segmentation fault while executing the
following:

    mpirun -np 4 /cm/shared/apps/WIEN2k_12/lapw1_mpi uplapw1_1.defcat

The errors are these:

    w2k_dispatch_signal(): received: Segmentation fault
    w2k_dispatch_signal(): received: Segmentation fault
    w2k_dispatch_signal(): received: Segmentation fault
    w2k_dispatch_signal(): received: Segmentation fault
    --------------------------------------------------------------------------
    MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
    with errorcode 91.

    NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
    You may or may not see output from other processes, depending on
    exactly when Open MPI kills them.
    --------------------------------------------------------------------------
     Child id           0 SIGSEGV, contact developers
     Child id           1 SIGSEGV, contact developers
     Child id           2 SIGSEGV, contact developers
     Child id           3 SIGSEGV, contact developers
    --------------------------------------------------------------------------
    mpirun has exited due to process rank 2 with PID 25559 on
    node node042 exiting without calling "finalize". This may
    have caused other processes in the application to be
    terminated by signals sent by mpirun (as reported here).
    --------------------------------------------------------------------------
    [node042:25556] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
    [node042:25556] Set MCA parameter "orte_base_help_aggregate" to 0 to see all
    help / error messages

And the ulimit -s is unlimited.

Could you help us run WIEN2k in parallel mode? The sequential version is
working fine.

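For completeness, the runtime environment could be checked with something like
the following (just a sketch; the lapw1_mpi path is the one from the command
above, and ldd is only informative if the binary is dynamically linked):

    # which mpirun is picked up, and which Open MPI version is it?
    which mpirun && mpirun --version
    # which MPI/MKL libraries does the binary resolve to?
    ldd /cm/shared/apps/WIEN2k_12/lapw1_mpi | grep -i -e mpi -e mkl
    # is the stack really unlimited on the compute nodes, not only on the login node?
    mpirun -np 4 sh -c 'ulimit -s'
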
Best Regards
Arturo

_______________________________________________
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien