[Wien] lapw0 problem after supercell
Hao
hao at qs.t.u-tokyo.ac.jp
Tue May 9 13:14:43 CEST 2006
Dear Stefaan:
I *have checked* the archives; that was why I also included the output of
"limit" in my e-mail. At that time I was not sure whether the problem was the
stack size, and I could not change the limit settings myself. Please do not
doubt my reliability; I will not intentionally waste anyone's time.
Warm regards
Dear Peter:
Thank you very much for your patience. The stack size has been changed to
2097152 kbytes, and lapw0 now runs fine.
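(For completeness: the limit was changed for me, since I could not change it
myself, so the lines below are only a rough sketch of how such a stack limit
is typically set in the shell, not the exact commands that were used here.)
   limit stacksize 2097152    # tcsh/csh, value in kbytes (or: limit stacksize unlimited)
   ulimit -s 2097152          # bash/sh equivalent (or: ulimit -s unlimited)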
However, after this the SCF cycle stops automatically. Following is the output:
Lapw1.error:
** Error in Parallel LAPW1
** LAPW1 STOPPED at Tue May 9 19:04:58 JST 2006
** check ERROR FILES!
Error in LAPW1
Error in LAPW1
Error in LAPW1
Error in LAPW1
Error in LAPW1
The dayfile is:
on reed with PID 16555
start (Tue May 9 19:03:41 JST 2006) with lapw0 (40/20 to go)
cycle 1 (Tue May 9 19:03:41 JST 2006) (40/20 to go)
> lapw0 -p (19:03:41) starting parallel lapw0 at Tue May 9 19:03:41 JST 2006
--------
running lapw0 in single mode
52.642u 15.747s 1:08.39 99.9% 0+0k 0+0io 1498pf+0w
> lapw1 -c -p (19:04:49) starting parallel lapw1 at Tue May 9 19:04:49 JST 2006
-> starting parallel LAPW1 jobs at Tue May 9 19:04:49 JST 2006
running LAPW1 in parallel mode (using .machines)
4 number_of_parallel_jobs
** LAPW1 crashed!
0.125u 0.200s 0:09.04 3.5% 0+0k 0+0io 12003pf+0w
* stop error
and the lapw1_*.error files each contain:
Error in LAPW1
The files gaas.scf1_* and gaas.output1_* also exist. I do not think this has
anything to do with NFS or MPI (MPI is not in use), because there is no
problem with other materials. For this case I increased NUME to 3000.
What is wrong?
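For reference, lapw1 was run k-point parallel with four jobs on this host. My
.machines file is of the usual k-point parallel form, roughly like the sketch
below (the exact file is not reproduced here; the host name reed is taken from
the dayfile, the remaining lines are typical defaults):
   granularity:1
   1:reed
   1:reed
   1:reed
   1:reed
   extrafine:1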
Thank you very much
Best wishes
Hao