We only need to care about the ordering of the clearing of the bit with
the uncached CSB read in order to correctly detect a new interrupt
before the read completes. The uncached read itself acts as a full
memory barrier, so we do not need to enforce another in the form of a
locked clear_bit.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
 drivers/gpu/drm/i915/intel_lrc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 59acdd3b7a12..c9010fa881b4 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -540,7 +540,14 @@ static void intel_lrc_irq_handler(unsigned long data)
 			dev_priv->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_BUF_LO(engine, 0));
 		unsigned int csb, head, tail;
 
-		clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
+		/* The write will be ordered by the uncached read (itself
+		 * a memory barrier), so we do not need another in the form
+		 * of a locked instruction. That is, the clear of the
+		 * IRQ_EXECLIST bit will be visible to another cpu prior to
+		 * the completion of the CSB read. If the other cpu receives
+		 * an interrupt before then, we will redo the CSB read.
+		 */
+		__clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
 		csb = readl(csb_mmio);
 		head = GEN8_CSB_READ_PTR(csb);
 		tail = GEN8_CSB_WRITE_PTR(csb);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
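
For illustration only, not part of the patch: a minimal sketch of the
producer/consumer pattern the ordering argument above relies on,
assuming the interrupt handler posts the bit with a locked set_bit()
before kicking the tasklet, and that the tasklet loops while the bit is
set. The struct, bit index and function names below (sketch_engine,
SKETCH_IRQ_EXECLIST, sketch_irq_handler, sketch_irq_tasklet) are
hypothetical stand-ins, not the real i915 definitions.

#include <linux/bitops.h>	/* set_bit(), __clear_bit(), test_bit() */
#include <linux/io.h>		/* readl() */

/* Hypothetical stand-in for the engine state the patch touches. */
struct sketch_engine {
	unsigned long irq_posted;
	void __iomem *csb_mmio;
};
#define SKETCH_IRQ_EXECLIST	0	/* stands in for ENGINE_IRQ_EXECLIST */

/* Interrupt side: post the bit (locked RMW) before scheduling the tasklet. */
static void sketch_irq_handler(struct sketch_engine *engine)
{
	set_bit(SKETCH_IRQ_EXECLIST, &engine->irq_posted);
	/* tasklet_hi_schedule(&engine->irq_tasklet); */
}

/* Tasklet side: the re-read loop the ordering argument depends on. */
static void sketch_irq_tasklet(struct sketch_engine *engine)
{
	while (test_bit(SKETCH_IRQ_EXECLIST, &engine->irq_posted)) {
		unsigned int csb;

		/*
		 * Plain (unlocked) store: the uncached readl() below acts
		 * as a full barrier, so the clear is visible to other cpus
		 * no later than the completion of the CSB read.
		 */
		__clear_bit(SKETCH_IRQ_EXECLIST, &engine->irq_posted);

		csb = readl(engine->csb_mmio);

		/* ... consume the CSB entries derived from csb ... */
		(void)csb;

		/*
		 * An interrupt that lands after the readl() re-sets the
		 * bit, so the loop simply performs another CSB read.
		 */
	}
}

Only the consumer-side clear relies on the subsequent MMIO read for its
ordering; the interrupt side keeps its locked set_bit(), which is why
the patch is limited to s/clear_bit/__clear_bit/ in the handler.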