On Mon, Jan 04, 2016 at 03:58:21PM +0000, Dave Gordon wrote:
> On 16/12/15 10:19, Chris Wilson wrote:
> >On Wed, Dec 16, 2015 at 10:44:19AM +0100, Daniel Vetter wrote:
> >>On Fri, Dec 11, 2015 at 11:33:03AM +0000, Chris Wilson wrote:
> >>>@@ -1288,7 +1286,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
> >>>
> >>> 	/* We need to check whether any gpu reset happened in between
> >>> 	 * the caller grabbing the seqno and now ... */
> >>>-	if (reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
> >>>+	if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
> >>
> >>READ_ONCE(req->reset_counter) as future proofing maybe for TDR? Or punt
> >>that to TDR? Could be confusing to READ_ONCE something that's (with the
> >>current code at least) invariant.
> >
> >Right, my plan was to solve this in TDR! At the moment, I am thinking
> >that we regenerate a new request for each one resubmitted (that requires
> >patching the seqno embedded into the ring before restarting the ring),
> >but otherwise we can just duplicate the contents of the request.
>
> With the scheduler, there's no need to patch the contents of the
> ringbuffer; we just reset it (so it's empty) and then have the
> scheduler re-emit the requests, starting after the one that hung.

Which breaks __i915_wait_request, since it has already noticed that the
reset counter has changed. Hence the discussion.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
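
For context, the future-proofed check Daniel floats above would presumably
look something like the sketch below. This is only an illustration of the
hunk under review with READ_ONCE() swapped in, not code from the series:
today req->reset_counter is written once at request creation and never
changed, so the annotation would be purely documentary until a TDR-style
resubmission path starts rewriting it concurrently.

	/* We need to check whether any gpu reset happened in between
	 * the caller grabbing the seqno and now ...
	 *
	 * READ_ONCE() flags that a future TDR resubmission may update
	 * req->reset_counter under us; with the current code the value
	 * is invariant after creation, hence the "punt to TDR" option.
	 */
	if (READ_ONCE(req->reset_counter) !=
	    i915_reset_counter(&dev_priv->gpu_error)) {
		/* ... bail out of the wait, as in the existing code ... */
	}

Punting the annotation to the TDR series, as discussed, avoids documenting
concurrency that does not yet exist in the driver.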