On 03/12/2019 11:53, Chris Wilson wrote:
Only along the submission path can we guarantee that the locked request
is indeed from a foreign engine, and so the nesting of engine/rq is
permissible. On the submission tasklet (process_csb()), we may find
ourselves competing with the normal nesting of rq/engine, invalidating
our nesting. As we only use the spinlock for debug purposes, skip the
debug if we cannot acquire the spinlock for safe validation - catching
99% of the bugs is better than causing a hard lockup.
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
drivers/gpu/drm/i915/gt/intel_lrc.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 37ab9742abe7..b411e4ce6771 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1300,7 +1300,6 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
}
for (port = execlists->pending; (rq = *port); port++) {
- unsigned long flags;
bool ok = true;
GEM_BUG_ON(!kref_read(&rq->fence.refcount));
@@ -1315,8 +1314,8 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
ce = rq->hw_context;
/* Hold tightly onto the lock to prevent concurrent retires! */
- spin_lock_irqsave_nested(&rq->lock, flags,
- SINGLE_DEPTH_NESTING);
+ if (!spin_trylock(&rq->lock))
+ continue;
if (i915_request_completed(rq))
goto unlock;
@@ -1347,7 +1346,7 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
}
unlock:
- spin_unlock_irqrestore(&rq->lock, flags);
+ spin_unlock(&rq->lock);
if (!ok)
return false;
}
With a Fixes: tag and the irqsave trylock variant:
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Regards,
Tvrtko