[Wien] lapw1_mpi error
Bertoni Giovanni
giovanni at cemes.fr
Tue Jan 6 09:58:32 CET 2004
Dear Peter Blaha and all Wien2k users,
I have a problem with lapw1_mpi. I have used the old versions moduls.F_old and hns.F_old,
as suggested in the mails between Florent and Peter Blaha (quoted below), but I still get
the error shown below from lapw1_mpi when running on 4 processors, i.e. with

1:localhost:4

in .machines (a sketch of the full file follows).
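For readers not familiar with the format, a minimal .machines for such a single-node MPI
run would look roughly like this (only the 1:localhost:4 line is taken from my actual file;
the comment and the granularity/extrafine lines are just the usual optional entries and may
differ in other setups):

    # one lapw1 job with weight 1 on localhost, split over 4 MPI processes
    1:localhost:4
    granularity:1
    extrafine:1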
lapw1 crashes, and I get this error when I run
x lapw1 -p
starting parallel lapw1 at Tue Dec 16 21:20:51 MET 2003
-> starting parallel LAPW1 jobs at Tue Dec 16 21:20:51 MET 2003
running LAPW1 in parallel mode (using .machines)
This is the output:
>x lapw1 -p
1 number_of_parallel_jobs
[1] 9720604
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
The current value of MPI_GROUP_MAX is 32
MPI: MPI_COMM_WORLD rank 2 has terminated without calling MPI_Finalize()
[1] + Done ( cd $PWD; $t $ttt; rm -f
.lock_$lockfile[$p] ) >> .time1_$loop
[1] 9724451
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
[1] + Done ( cd $PWD; $t $ttt; rm -f
.lock_$lockfile[$p] ) >> .time1_$loop
[1] 9723143
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI has run out of internal group entries.
Please set the environment variable MPI_GROUP_MAX for additional space.
The current value of MPI_GROUP_MAX is 32
MPI: MPI_COMM_WORLD rank 1 has terminated without calling MPI_Finalize()
[1] + Done ( cd $PWD; $t $ttt; rm -f
.lock_$lockfile[$p] ) >> .time1_$loop
[1] 9719387
STOP LAPW1 END
STOP
STOP LAPW1 END
STOP
STOP LAPW1 END
STOP
STOP LAPW1 END
STOP
[1] + Done ( cd $PWD; $t $ttt; rm -f
.lock_$lockfile[$p] ) >> .time1_$loop
** LAPW1 crashed!
4.3u 4.0s 0:19 43% 0+0k 34+36io 9pf+0w
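As far as I can tell, the message itself points to a possible workaround, namely raising
MPI_GROUP_MAX (currently 32) in the environment before starting the run. In csh this would
be something like the following, where 512 is only my guess at a sufficiently large value,
not a documented number:

    # set before running "x lapw1 -p"; 512 is an arbitrary guess
    setenv MPI_GROUP_MAX 512

I do not know whether this would actually fix anything or only hide a problem in the new version.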
If you have any suggestions, I would be grateful. Thank you.

Giovanni Bertoni
Peter Blaha wrote:
>
> Dear Florent,
>
> This is the second complaint about the new mpi-parallel lapw1 version.
> This version was developed on an IBM SP4 in Garching and seems to run fine
> on these machines. This new version should run faster in the HNS part,
> where the old version had significant sequential overhead. However, it
> seems to cause problems on both Linux PCs and this Alpha machine.
>
> Please use moduls.F_old and hns.F_old (only these two "old" files, not
> any others!!) and try to compile and run with the "old" version.
>
> PS: To all others who are running lapw1mpi: please send me any info on whether you
> are able to run / have problems with the new lapw1mpi version ("new" means since
> September 2003). Unfortunately I, too, only have an IBM machine available
> for mpi runs, and there it runs fine.
>