On Fri, 13 Jul 2012 10:34:43 +0200, Daniel Vetter <daniel at ffwll.ch> wrote:
> On Thu, Jul 12, 2012 at 09:07:52PM +0100, Chris Wilson wrote:
> > On Thu, 12 Jul 2012 21:37:16 +0200, Daniel Vetter <daniel at ffwll.ch> wrote:
> > > On Thu, Jul 12, 2012 at 04:13:36PM +0100, Chris Wilson wrote:
> > > > 		obj->base.write_domain = 0;
> > > > -		list_del_init(&obj->gpu_write_list);
> > > > +		obj->pending_gpu_write = false;
> > > > 		i915_gem_object_move_to_inactive(obj);
> > >
> > > Hm, this hunk makes me wonder whether we don't leak some bogus state
> > > across a gpu reset. Can't we just reuse the move_to_inactive helper here,
> > > ensuring that we consistently clear up any active state?
> >
> > Yes. I had planned to finish off with another patch to remove that pair
> > of then redundant lines, but apparently forgot. I think it is better
> > expressed as a second patch, at least from the point of view of how I
> > was applying the transformations.
>
> Actually I've been confused yesterday, we do call move_to_inactive in the
> next line ;-)
>
> I've looked again at your patch, and the changes to how we handle
> obj->pending_gpu_write look buggy. The above hunk is superfluous, because
> move_to_inactive will do that.

Right, but the transformation is a two-stage process. It was more obvious
to first replace gpu_write_list with pending_gpu_write, and then remove the
duplicate lines later.

> The hunk in i915_gem_execbuffer is buggy, we can't clear
> pending_gpu_write there - we use that to decide whether we need to stall
> for the gpu when the cpu does a read-only access. We then stall
> completely because we don't keep track of the last write seqno separately.

No. By the time the previous breadcrumb has been seen, we are guaranteed
to have flushed the gpu caches. So for any wait in the future, the caches
have already been flushed before we return.

The next patch will be to remove the obsolete pending_gpu_write. Hopefully
that will make you feel calmer.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre