When dma_fence_signal() is called, it sets a flag to indicate the fence
is complete. Before the dma_fence is signaled, the seqno check will
already have passed. However, if the request is being held by an
observer across a global seqno reset, it is possible for the observer
to not notice the seqno completion (as the HWS is now zero), so instead
we rely on the signaled bit. External holders of the request should be
using the fence interface, which does check the signaled bit first - it
will only be an unlocked waiter that may not notice the request
completion.

By that analysis, the problem is as old as our unlocked waits, but this
patch depends upon subclassing dma-fence, so use that as the base
commit.

Fixes: 04769652c8c7 ("drm/i915: Derive GEM requests from dma-fence")
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx> # v4.9+
---
 drivers/gpu/drm/i915/i915_drv.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 994929660027..dcc8f73b26bf 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -4026,6 +4026,9 @@ __i915_request_irq_complete(struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
 
+	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &req->fence.flags))
+		return true;
+
 	/* Before we do the heavier coherent read of the seqno,
 	 * check the value (hopefully) in the CPU cacheline.
 	 */
-- 
2.11.0
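
As an aside, the reason external holders are already safe is that the
dma-fence interface tests the signaled bit before consulting any
driver-specific state. The snippet below is a simplified, non-verbatim
sketch of the logic behind dma_fence_is_signaled() (see
include/linux/dma-fence.h); the helper name fence_completed_sketch() is
made up purely for illustration:

#include <linux/dma-fence.h>

/*
 * Simplified sketch (not a verbatim copy) of the check performed by
 * dma_fence_is_signaled(): the signaled bit is tested before any
 * driver-specific completion test, so a fence-interface user observes
 * completion even after the HWS seqno has been reset to zero.
 */
static bool fence_completed_sketch(struct dma_fence *fence)
{
	/* Set once by dma_fence_signal(); never cleared by a seqno reset. */
	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		return true;

	/* Fall back to the driver's own notion of completion (for i915,
	 * a breadcrumb/seqno comparison that may read zero across a
	 * global seqno reset).
	 */
	if (fence->ops->signaled && fence->ops->signaled(fence)) {
		dma_fence_signal(fence);
		return true;
	}

	return false;
}

An unlocked waiter that goes straight to the seqno comparison skips
that first test, which is exactly the window the hunk above closes by
checking DMA_FENCE_FLAG_SIGNALED_BIT at the top of
__i915_request_irq_complete().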