Discussed this a bit with Chris, I think a comment here is warranted
that this will be bad once we have more than one i915 instance. And
lockdep won't catch it.

Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 74da35611d7c..70dc506a5426 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -135,6 +135,12 @@ userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 	switch (mutex_trylock_recursive(unlock)) {
 	default:
 	case MUTEX_TRYLOCK_FAILED:
+		/*
+		 * NOTE: This only works because there's only
+		 * ever one i915-style struct_mutex in the
+		 * entire system. If we could have two i915
+		 * instances, this would deadlock.
+		 */
 		if (mutex_lock_killable_nested(unlock, I915_MM_SHRINKER)) {
 			i915_gem_object_put(obj);
 			return -EINTR;
-- 
2.22.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
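
As an illustration of the point the comment makes, here is a stand-alone
user-space sketch (pthreads; fake_i915, fake_notifier and the two dev_a/dev_b
instances are made-up names, this is not the driver's code) of why the
"trylock, and on MUTEX_TRYLOCK_FAILED fall back to a nested blocking lock"
pattern is only safe while there is exactly one struct_mutex in the system:
with two instances, two threads can end up taking the locks in opposite
order, and the nesting annotation only silences lockdep, it does not stop
the ABBA deadlock.

/*
 * User-space analogy only, compile with: cc abba_demo.c -lpthread
 *
 * Thread 1 holds A's lock and, from its "notifier", blocks on B's lock.
 * Thread 2 holds B's lock and blocks on A's lock. With a single global
 * lock this situation cannot arise; with two instances it can hang.
 */
#include <pthread.h>
#include <stdio.h>

struct fake_i915 {
	pthread_mutex_t struct_mutex;
};

static struct fake_i915 dev_a = { PTHREAD_MUTEX_INITIALIZER };
static struct fake_i915 dev_b = { PTHREAD_MUTEX_INITIALIZER };

/* Stand-in for the invalidate_range_start path; may run against any device. */
static void fake_notifier(struct fake_i915 *dev)
{
	if (pthread_mutex_trylock(&dev->struct_mutex) != 0) {
		/*
		 * In the kernel this is mutex_lock_killable_nested(...,
		 * I915_MM_SHRINKER): the subclass only tells lockdep to
		 * stay quiet, it does not prevent an actual deadlock.
		 */
		pthread_mutex_lock(&dev->struct_mutex);
	}
	/* ... unmap the userptr object ... */
	pthread_mutex_unlock(&dev->struct_mutex);
}

static void *thread1(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&dev_a.struct_mutex);	/* holds A */
	fake_notifier(&dev_b);				/* may block on B */
	pthread_mutex_unlock(&dev_a.struct_mutex);
	return NULL;
}

static void *thread2(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&dev_b.struct_mutex);	/* holds B */
	fake_notifier(&dev_a);				/* may block on A */
	pthread_mutex_unlock(&dev_b.struct_mutex);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, thread1, NULL);
	pthread_create(&t2, NULL, thread2, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("done (with two instances this can hang before getting here)\n");
	return 0;
}

Whether the sketch actually hangs depends on timing, which is exactly the
kind of bug the subclass annotation would hide from lockdep in the kernel.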