If an interrupt arrives whilst we are performing the irq-seqno barrier,
recheck the seqno again before returning.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_drv.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c1b400f1ede4..ecb8b414bdd2 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -4102,6 +4102,9 @@ __i915_request_irq_complete(const struct drm_i915_gem_request *req)
 	if (__i915_gem_request_completed(req, seqno))
 		return true;
 
+	if (!engine->irq_seqno_barrier)
+		return false;
+
 	/* Ensure our read of the seqno is coherent so that we
 	 * do not "miss an interrupt" (i.e. if this is the last
 	 * request and the seqno write from the GPU is not visible
@@ -4113,8 +4116,7 @@ __i915_request_irq_complete(const struct drm_i915_gem_request *req)
 	 * but it is easier and safer to do it every time the waiter
 	 * is woken.
 	 */
-	if (engine->irq_seqno_barrier &&
-	    test_and_clear_bit(ENGINE_IRQ_BREADCRUMB, &engine->irq_posted)) {
+	while (test_and_clear_bit(ENGINE_IRQ_BREADCRUMB, &engine->irq_posted)) {
		unsigned long flags;
 
 		/* The ordering of irq_posted versus applying the barrier
--
2.11.0
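
For reference, a minimal user-space sketch of the recheck loop the hunk
above introduces. The engine structure and helpers below (seqno_passed(),
clear_irq_posted(), the stubbed irq_seqno_barrier callback) are simplified
stand-ins invented for illustration, not the i915 definitions; the point is
only to show that an interrupt posted while the barrier runs causes one more
barrier-plus-recheck before the function gives up and returns false.

/*
 * Illustrative model only: the types and helpers here are stand-ins,
 * not the kernel's. Builds with any C11 compiler, e.g. gcc -std=c11.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct engine {
	atomic_uint irq_posted;		/* set by the (simulated) interrupt */
	unsigned int hw_seqno;		/* last seqno the "GPU" made visible */
	void (*irq_seqno_barrier)(struct engine *e);
};

/* Has the engine advanced to or past the requested seqno? */
static bool seqno_passed(const struct engine *e, unsigned int seqno)
{
	return (int)(e->hw_seqno - seqno) >= 0;
}

/* Stand-in for test_and_clear_bit(ENGINE_IRQ_BREADCRUMB, &engine->irq_posted). */
static bool clear_irq_posted(struct engine *e)
{
	return atomic_exchange(&e->irq_posted, 0) != 0;
}

static void stub_seqno_barrier(struct engine *e)
{
	(void)e;	/* the real callback flushes stale seqno reads */
}

static bool request_irq_complete(struct engine *e, unsigned int seqno)
{
	if (seqno_passed(e, seqno))
		return true;

	if (!e->irq_seqno_barrier)
		return false;

	/*
	 * If another interrupt is posted while the barrier is running,
	 * irq_posted is set again and we loop: apply the barrier once
	 * more and recheck the seqno before reporting "not complete".
	 */
	while (clear_irq_posted(e)) {
		e->irq_seqno_barrier(e);
		if (seqno_passed(e, seqno))
			return true;
	}

	return false;
}

int main(void)
{
	struct engine e = {
		.hw_seqno = 4,
		.irq_seqno_barrier = stub_seqno_barrier,
	};

	atomic_store(&e.irq_posted, 1);		/* pretend an irq fired */
	printf("seqno 3 complete: %d\n", request_irq_complete(&e, 3));	/* 1 */
	printf("seqno 5 complete: %d\n", request_irq_complete(&e, 5));	/* 0 */
	return 0;
}

This mirrors the control flow only; the locking, the irq_posted ordering
comments and the waiter wakeup in the real __i915_request_irq_complete()
are deliberately left out.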