If we declare the driver wedged before the GPU truly is, then we may
see the GPU complete some CS events following our cancellation. This
leaves us quite confused, as we have already deleted all the bookkeeping
and so end up complaining about the inconsistent state. Instead, we can
simply ignore the remaining events and let the GPU idle by not feeding
it, avoiding any racy overwrite of the shared state. We rely on there
being a full GPU reset before unwedging, which gives us the opportunity
to reset that shared state.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107188
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/intel_lrc.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 8fd8de71c2b5..6fef9d130d55 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -810,6 +810,11 @@ static void reset_csb_pointers(struct intel_engine_execlists *execlists)
 	WRITE_ONCE(*execlists->csb_write, execlists->csb_write_reset);
 }
 
+static void nop_submission_tasklet(unsigned long data)
+{
+	/* The driver is wedged; don't process any more events. */
+}
+
 static void execlists_cancel_requests(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -869,6 +874,9 @@ static void execlists_cancel_requests(struct intel_engine_cs *engine)
 	execlists->queue = RB_ROOT_CACHED;
 	GEM_BUG_ON(port_isset(execlists->port));
 
+	GEM_BUG_ON(__tasklet_is_enabled(&execlists->tasklet));
+	execlists->tasklet.func = nop_submission_tasklet;
+
 	spin_unlock_irqrestore(&engine->timeline.lock, flags);
 }
 
-- 
2.18.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
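
For readers less familiar with the execlists code, below is a minimal,
self-contained sketch of the pattern the patch relies on: once the driver
is wedged, the submission callback is swapped for a no-op so that any late
completion events are silently dropped, and the real callback only comes
back after a full reset. All names here (fake_engine, real_submission,
cancel_requests, full_reset) are hypothetical stand-ins for illustration,
not the i915 API; the patch above does the equivalent by pointing
execlists->tasklet.func at nop_submission_tasklet.

/*
 * Illustrative user-space analogue of the "nop tasklet" trick.
 * Not i915 code; all names are made up for this sketch.
 */
#include <stdio.h>

struct fake_engine {
	/* Callback invoked for each completion event. */
	void (*submit)(struct fake_engine *engine, int event);
	int wedged;
};

static void real_submission(struct fake_engine *engine, int event)
{
	printf("processing event %d\n", event);
}

static void nop_submission(struct fake_engine *engine, int event)
{
	/* Wedged: drop the event instead of touching stale bookkeeping. */
}

static void cancel_requests(struct fake_engine *engine)
{
	/* ...flush queues, complete outstanding requests... */
	engine->wedged = 1;
	engine->submit = nop_submission;	/* ignore anything that trickles in */
}

static void full_reset(struct fake_engine *engine)
{
	/* Only after a complete reset is it safe to accept events again. */
	engine->wedged = 0;
	engine->submit = real_submission;
}

int main(void)
{
	struct fake_engine engine = { .submit = real_submission };

	engine.submit(&engine, 1);	/* handled normally */
	cancel_requests(&engine);
	engine.submit(&engine, 2);	/* silently dropped while wedged */
	full_reset(&engine);
	engine.submit(&engine, 3);	/* handled again after reset */
	return 0;
}

In the actual patch the pointer swap happens under engine->timeline.lock
with the tasklet already disabled (hence the GEM_BUG_ON), so no event can
race with the update; the commit message relies on a full GPU reset before
unwedging, which presumably also reinstates the real submission tasklet.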