When we wedge the device, we clear out the in-flight requests and
advance the breadcrumb to indicate they are complete. However, the
breadcrumb advance includes an assert that the engine is idle, so that
advancement needs to be the last step to ensure we pass our own sanity
checks.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 580b5042f4f7..40e94b4ef532 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3114,13 +3114,6 @@ static void engine_set_wedged(struct intel_engine_cs *engine)
 		dma_fence_set_error(&request->fence, -EIO);
 	spin_unlock_irqrestore(&engine->timeline->lock, flags);
 
-	/* Mark all pending requests as complete so that any concurrent
-	 * (lockless) lookup doesn't try and wait upon the request as we
-	 * reset it.
-	 */
-	intel_engine_init_global_seqno(engine,
-				       intel_engine_last_submit(engine));
-
 	/*
 	 * Clear the execlists queue up before freeing the requests, as those
 	 * are the ones that keep the context and ringbuffer backing objects
@@ -3149,6 +3142,13 @@ static void engine_set_wedged(struct intel_engine_cs *engine)
 		 */
 		clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
 	}
+
+	/* Mark all pending requests as complete so that any concurrent
+	 * (lockless) lookup doesn't try and wait upon the request as we
+	 * reset it.
+	 */
+	intel_engine_init_global_seqno(engine,
+				       intel_engine_last_submit(engine));
 }
 
 static int __i915_gem_set_wedged_BKL(void *data)
-- 
2.13.2
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
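[Editor's note] For readers following the ordering argument in the commit message, here is a minimal
standalone sketch of the constraint it describes. Everything below (fake_engine,
engine_is_idle(), advance_breadcrumb(), clear_execlists_queue()) is a hypothetical
stand-in, not the i915 code; it only models the claim above that the breadcrumb
advance asserts idleness and therefore has to run after the execlists queue is
emptied.

/* Illustrative sketch only: a simplified model of the ordering constraint
 * described in the commit message. The types and helpers are hypothetical
 * stand-ins, not the real i915 definitions.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_engine {
	unsigned int queued;      /* requests still sitting in the execlists queue */
	unsigned int seqno;       /* last completed breadcrumb */
	unsigned int last_submit; /* last submitted breadcrumb */
};

static bool engine_is_idle(const struct fake_engine *e)
{
	return e->queued == 0;
}

/* Loosely models intel_engine_init_global_seqno(): advancing the breadcrumb
 * asserts that the engine is idle, per the commit message.
 */
static void advance_breadcrumb(struct fake_engine *e, unsigned int seqno)
{
	assert(engine_is_idle(e)); /* the sanity check the patch must satisfy */
	e->seqno = seqno;
}

static void clear_execlists_queue(struct fake_engine *e)
{
	e->queued = 0;
}

static void engine_set_wedged_sketch(struct fake_engine *e)
{
	/* Wrong order: advancing here would trip the idle assert while
	 * requests are still queued.
	 */
	/* advance_breadcrumb(e, e->last_submit); */

	clear_execlists_queue(e);

	/* Right order: only once the queue is empty does the idle assert hold. */
	advance_breadcrumb(e, e->last_submit);
}

int main(void)
{
	struct fake_engine e = { .queued = 2, .seqno = 10, .last_submit = 12 };

	engine_set_wedged_sketch(&e);
	printf("seqno advanced to %u with engine idle\n", e.seqno);
	return 0;
}

With the call moved to the end of engine_set_wedged(), as in the second hunk
above, the queue-clearing path runs before the idle assert is evaluated.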