Following a GPU reset upon hang, we retire all the requests and then
mark them all as complete. If we mark them as complete first, we both
keep the normal retirement order (completed first then retired) and
provide a small optimisation for concurrent lookups.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 93a874b0ba14..f6f039aad6e2 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2200,6 +2200,12 @@ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
 {
 	struct intel_ring *ring;
 
+	/* Mark all pending requests as complete so that any concurrent
+	 * (lockless) lookup doesn't try and wait upon the request as we
+	 * reset it.
+	 */
+	intel_engine_init_seqno(engine, engine->last_submitted_seqno);
+
 	/*
 	 * Clear the execlists queue up before freeing the requests, as those
 	 * are the ones that keep the context and ringbuffer backing objects
@@ -2241,8 +2247,6 @@ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
 		ring->last_retired_head = ring->tail;
 		intel_ring_update_space(ring);
 	}
-
-	intel_engine_init_seqno(engine, engine->last_submitted_seqno);
 }
 
 void i915_gem_reset(struct drm_device *dev)
-- 
2.8.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx