[PATCH] drm/i915: Enforce TYPESAFE_BY_RCU vs refcount mb on reinitialisation

By using TYPESAFE_BY_RCU, we accept that requests may be swapped out from
underneath us, even when using rcu_read_lock(). We use a strong barrier
on acquiring the refcount during lookup, but this needs to be paired
with a barrier on re-initialising it. Currently we call dma_fence_init,
which ultimately does a plain atomic_set(1) on the refcount, not
providing any memory barriers. As we inspect some state before even
acquiring the refcount in the lookup (by arguing that we can detect
inconsistent requests), that state should be initialised before the
refcount.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_request.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 5c2c93cbab12..04a0b8e75533 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -768,6 +768,13 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
 	rq->timeline = ce->ring->timeline;
 	GEM_BUG_ON(rq->timeline == &engine->timeline);
 
+	/*
+	 * In order to coordinate with our RCU lookup,
+	 * __i915_gem_active_get_rcu(), we need to ensure that the change
+	 * to rq->engine is visible before acquiring the refcount in the lookup.
+	 */
+	smp_wmb();
+
 	spin_lock_init(&rq->lock);
 	dma_fence_init(&rq->fence,
 		       &i915_fence_ops,
-- 
2.18.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx