I noticed that the cluster_duration column in the job_instance table is no longer populated, even though monitord still generates an event that reports the cluster duration.
For example:
ts=2012-03-27T00:19:01.000000Z event=stampede.job_inst.main.end level=Info status=0 stdout.file=merge_pegasus-findrange-4.0_PID2_ID1.out.000 cluster.start=1332807415 stderr.text="" js.id=4 xwf.id=16d0cd12-fb3a-4429-9d10-185e7f38c07f job.id=merge_pegasus-findrange-4.0_PID2_ID1 site=local local.dur=121 work_dir=/lfs1/work/pegasus-features/PM-592/work/vahi/pegasus/blackdiamond/run0001 user=vahi multiplier_factor=1 stdout.text="#@ 1 stdout%0A%0A#@ 1 stderr%0A%0A#@ 2 stdout%0A%0A#@ 2 stderr%0A%0A" exitcode=0 stderr.file=merge_pegasus-findrange-4.0_PID2_ID1.err.000 cluster.dur=120.228 job_inst.id=4 sched.id=3559.0
Note the cluster.dur attribute in the event above, which is consistent with the Yang schema.
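To confirm the attribute really is present in the event stream, the BP line can be parsed as space-separated key=value pairs. This is only a minimal sketch for inspection, not monitord's actual parser; the helper name `parse_bp_event` is hypothetical.

```python
import shlex
from urllib.parse import unquote

def parse_bp_event(line):
    """Parse a netlogger BP event line of space-separated key=value pairs.

    shlex handles quoted values (e.g. stderr.text="");
    %-encoded characters in text fields are decoded with unquote.
    """
    record = {}
    for token in shlex.split(line):
        key, _, value = token.partition("=")
        record[key] = unquote(value)
    return record

# Abbreviated version of the event above (hypothetical test input).
event = ('ts=2012-03-27T00:19:01.000000Z event=stampede.job_inst.main.end '
         'status=0 js.id=4 job.id=merge_pegasus-findrange-4.0_PID2_ID1 '
         'local.dur=121 exitcode=0 cluster.start=1332807415 '
         'cluster.dur=120.228 job_inst.id=4')

rec = parse_bp_event(event)
# cluster.dur is present in the event, so the loader should be able to
# populate the cluster_duration column from it.
print(rec["cluster.dur"])
```

If the attribute parses out correctly here, the gap is presumably on the database-loading side rather than in event generation.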
I am attaching a bp file for the whole workflow.