On Fri, Sep 15, 2017 at 05:59:30PM +0100, Chris Wilson wrote:
> Quoting Chris Wilson (2017-09-15 17:49:16)
> > diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> > index abf171c3cb9c..04fc50c993bf 100644
> > --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> > +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> > @@ -306,6 +306,14 @@ struct intel_engine_cs {
> >  	void		(*schedule)(struct drm_i915_gem_request *request,
> >  				    int priority);
> >  
> > +	/*
> > +	 * Cancel all requests on the hardware, or queued for execution.
> > +	 *
> > +	 * This is called under the engine->timeline->lock when marking
> > +	 * the device as wedged.
> > +	 */
> > +	void		(*cancel_all_requests)(struct intel_engine_cs *engine);
> 
> cancel_all_requests is a bit too broad, could just shorten it to
> cancel_requests with the doc explaining that we only cancel the requests
> that have been submitted to the engine (not the not-yet-ready requests
> still floating in the aether).
> -Chris

I agree.

Note that we're still doing part of the work directly in submit_notify:

	/* Mark all executing requests as skipped */
	list_for_each_entry(request, &engine->timeline->requests, link) {
		GEM_BUG_ON(!request->global_seqno);
		if (!i915_gem_request_completed(request))
			dma_fence_set_error(&request->fence, -EIO);
	}

Perhaps it would be cleaner if we could extract that
(cancel_requests_inflight?) and use it as the cancel_requests for the
legacy ringbuffer, then just call into that from the execlists
cancel_requests. Thoughts?

-Michał
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
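
P.S. To make the proposal concrete, a rough sketch of the extraction I
have in mind (cancel_requests_inflight is only a placeholder name, and
the execlists side is elided — this is not a tested patch, just the
shape of the refactor):

	/* Shared helper: mark every request already submitted to the
	 * engine as skipped. Caller holds engine->timeline->lock. */
	static void cancel_requests_inflight(struct intel_engine_cs *engine)
	{
		struct drm_i915_gem_request *request;

		list_for_each_entry(request, &engine->timeline->requests, link) {
			GEM_BUG_ON(!request->global_seqno);
			if (!i915_gem_request_completed(request))
				dma_fence_set_error(&request->fence, -EIO);
		}
	}

The legacy ringbuffer backend would point ->cancel_requests straight at
the helper, while the execlists backend would call it first and then do
its own queue cleanup on top.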