Having resolved whether or not we would deadlock upon a call to
mutex_lock(&dev->struct_mutex), we can then spin for the contended
struct_mutex if we are not the owner. We cannot afford to simply block
and wait for the mutex, as the owner may itself be waiting for the
allocator -- i.e. a cyclic deadlock. This should significantly improve
the chance of running the shrinker for other processes whilst the GPU
is busy.

A more balanced approach would be to optimistically spin whilst the
mutex owner was on the cpu and there was an opportunity to acquire the
mutex for ourselves quickly. However, that requires support from
kernel/locking/ and a new mutex_spin_trylock() primitive.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_shrinker.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index 58f27369183c..1032f98add11 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -38,16 +38,21 @@ static bool shrinker_lock(struct drm_i915_private *dev_priv, bool *unlock)
 {
 	switch (mutex_trylock_recursive(&dev_priv->drm.struct_mutex)) {
-	case MUTEX_TRYLOCK_FAILED:
-		return false;
-
-	case MUTEX_TRYLOCK_SUCCESS:
-		*unlock = true;
-		return true;
-
 	case MUTEX_TRYLOCK_RECURSIVE:
 		*unlock = false;
 		return true;
+
+	case MUTEX_TRYLOCK_FAILED:
+		do {
+			cpu_relax();
+			if (mutex_trylock(&dev_priv->drm.struct_mutex)) {
+	case MUTEX_TRYLOCK_SUCCESS:
+				*unlock = true;
+				return true;
+			}
+		} while (!need_resched());
+
+		return false;
 	}
 
 	BUG();
-- 
2.11.0
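
For illustration only (not part of the patch): a minimal sketch of what the
mutex_spin_trylock()-style helper alluded to above might look like, built
solely from existing primitives (mutex_trylock(), cpu_relax(),
need_resched()). The helper name is hypothetical; a real implementation in
kernel/locking/ could additionally verify that the lock owner is still
running on another CPU before each retry, rather than bounding the spin only
by need_resched().

#include <linux/mutex.h>
#include <linux/sched.h>
#include <asm/processor.h>

/*
 * Hypothetical sketch: retry the trylock while we are allowed to keep
 * the CPU, giving up as soon as the scheduler wants us off.  This
 * mirrors the loop the patch open-codes in shrinker_lock().
 */
static bool example_mutex_spin_trylock(struct mutex *lock)
{
	do {
		if (mutex_trylock(lock))
			return true;
		cpu_relax();
	} while (!need_resched());

	return false;
}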