[Wien] Problems in phonon calculations (3)
Peter Blaha
pblaha at theochem.tuwien.ac.at
Sat Jan 8 14:48:02 CET 2011
You need to understand what you are doing:
I guess you submit your job in a directory AB2_phonon.
The job creates a .machines file in THIS directory (look at it with an
editor and verify that it looks as expected, with 24 lines).
Now understand the run_phonon job file.
Inside the loop over all 18 cases it does a cd case_$i, and inside this directory it runs run_lapw ...
So change into case_1 and examine the .machines file.
It will of course NOT show the 24 lines, but will be some dummy default .machines file with just 2 lines.
So, before the cd case_$i, insert the line
cp .machines case_$i
into the run_phonon job (see the sketch below).
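A minimal sketch of the patched loop (the loop body is taken from your job file quoted below; only the cp line is new):

  foreach i ( 1 17 18 )
    # copy the proper 24-line .machines into the case directory,
    # overwriting the 2-line dummy file there
    cp .machines case_$i
    cd case_$i
    run_lapw -I -i 200 -cc 0.0001 -in1ef -p -fc 0.1
    save_lapw case_${i}_gga_rkm7.00_3000k
    cd ..
  end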
PS: If your job is still running, you can also do this manually on the command line (for all of case_1 ... case_18, as in the one-liner below),
and the proper .machines file will be used for all new iterations.
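For example, from the top AB2_phonon directory (a sketch; assumes csh and that seq is available, otherwise list 1 2 ... 18 explicitly):

  foreach i ( `seq 1 18` )
    cp .machines case_$i
  end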
PPS: Looking at your cpu-time (more than 40 h !! for lapw1), I doubt that you want to wait
until this job has finished (even on 24 cpus). How large is your supercell? This determines how much you can
reduce the k-mesh!
PPPS: You should learn on small examples how to make meaningful calculations. You must examine for
YOUR system (nobody can help you here) and your properties what a good RKMAX or k-mesh is.
In other words: calculate phonons with a low RKMAX and few k-points, and increase the parameters
until the frequencies no longer change significantly (see the sketch below).
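A minimal sketch of such a convergence series (the RKMAX values and save names are only illustrations; RKMAX is changed by editing case.in1, and the k-mesh with x kgen, before each run):

  foreach rkm ( 5.0 6.0 7.0 )
    # set RKMAX in case.in1 to $rkm first (by hand or with sed), then:
    run_lapw -p -cc 0.0001 -fc 0.1
    save_lapw case_rkm${rkm}
  end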
On 08.01.2011 at 07:42, Ghosh SUDDHASATTWA wrote:
> Dear Wien2k users, and Prof. Blaha,
>
> Can you please help me out with the following query? As suggested earlier, I had initialized my phonon calculations with a 48-atom supercell.
>
> I created and initialized case_1
>
> My phonon job file is
>
> #!/bin/csh -f
> #
> set file=AB2_phonon
> #
> foreach i ( \
>   1 \
>   17 \
>   18 \
> )
>   cd case_$i
>   echo running case_$i
>   #
>   # select other options if necessary
>   run_lapw -I -i 200 -cc 0.0001 -in1ef -p -fc 0.1
>   #
>   # select other save-name if necessary
>   save_lapw case_${i}_gga_rkm7.00_3000k
>   cd ..
> end
>
> I submitted the job by qsub -pe kpoint 24 kpoint.sh
>
> My machine file is
>
> 1:ibnx70
> 1:ibnx70
> 1:ibnx70
> 1:ibnx81
> 1:ibnx81
> 1:ibnx81
> 1:ibnx81
> 1:ibnx81
> 1:ibnx79
> 1:ibnx79
> 1:ibnx79
> 1:ibnx65
> 1:ibnx65
> 1:ibnx60
> 1:ibnx60
> 1:ibnx60
> 1:ibnx60
> 1:ibnx60
> 1:ibnx60
> 1:ibnx60
> 1:ibnx60
> 1:ibnx96
> 1:ibnx96
> 1:ibnx96
> granularity:1
> extrafine:1
>
> For the last three days, the queuing system has shown the job as running.
>
> Surprisingly, there is no “running” mode in w2web.
>
> I checked the case_1 directory
>
> case_1.dayfile shows (surprisingly)
>
> Calculating case_1 in /group5/cg/sghosh/WIEN2k/lapw/phonon/UZr2_phonon/case_1
> on nx70.igcar.gov.in with PID 10356
> start (Thu Jan 6 19:11:06 IST 2011) with lapw0 (-in1ef/99 to go)
> cycle 1 (Thu Jan 6 19:11:06 IST 2011) (-in1ef/99 to go)
>> lapw0 -p (19:11:06) starting parallel lapw0 at Thu Jan 6 19:11:06 IST 2011
> -------- .machine0 : processors
> running lapw0 in single mode
> 177.410u 5.691s 3:11.36 95.6% 0+0k 0+0io 17pf+0w
>> lapw1 -c -p (19:14:17) starting parallel lapw1 at Thu Jan 6 19:14:18 IST 2011
> -> starting parallel LAPW1 jobs at Thu Jan 6 19:14:19 IST 2011
> running LAPW1 in parallel mode (using .machines)
> 2 number_of_parallel_jobs
> localhost(672) 90833.210u 359.642s 1+01:28:03.65 99.46% 0+0k 0+0io 0pf+0w
> 105493.144u 437.929s 1+13:49:58.03 77.78% 0+0k 0+0io 0pf+0w
> 120959.238u 433.725s 1+09:50:20.14 99.65% 0+0k 0+0io 0pf+0w
> localhost(672) 90579.555u 334.655s 1+01:22:25.95 99.53% 0+0k 0+0io 0pf+0w
> 105170.999u 500.369s 1+13:44:24.32 77.78% 0+0k 0+0io 0pf+0w
> 120363.259u 426.543s 1+09:40:41.44 99.63% 0+0k 0+0io 0pf+0w
> Summary of lapw1para:
> localhost k=1344 user=181413 wallclock=170
> 73.491u 124.896s 33:50:25.33 0.1% 0+0k 0+0io 0pf+0w
>> lapw2 -c -p (05:04:43) running LAPW2 in parallel mode
> [1] 4250
> [2] 4292
>
> That is only 2 parallel jobs; I thought it should be 24.
>
> Can you please explain why this is happening?
>
> Thank you
>
> Suddhasattwa
>
> SUDDHASATTWA GHOSH
> Scientific Officer (D)
> Pyrochemical Process Studies Section
> Fuel Chemistry Division
> Chemistry Group
> Indira Gandhi Centre for Atomic Research
> Kalpakkam
> Tamilnadu
> 603102
> India
> Phone: 91-44-27480500 (Ext: 24283)
> Fax: 91-44-27480065
>
> _______________________________________________
> Wien mailing list
> Wien at zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
--
-----------------------------------------
Peter Blaha
Inst. Materials Chemistry, TU Vienna
Getreidemarkt 9, A-1060 Vienna, Austria
Tel: +43-1-5880115671
Fax: +43-1-5880115698
email: pblaha at theochem.tuwien.ac.at
-----------------------------------------