We always try to do an unlocked wait before resorting to a blocking wait
under the mutex, so we very rarely have to sleep under the struct_mutex.
However, when we do, we want that wait to be as short as possible, as the
struct_mutex is our BKL that will stall the driver and all clients.
There should be no impact for typical workloads.

v2: Move down a layer to apply to all waits.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_request.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index bacb875a6ef3..7be17d9c304b 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -1054,6 +1054,15 @@ long i915_wait_request(struct drm_i915_gem_request *req,
 	if (!timeout)
 		return -ETIME;
 
+	/* Very rarely do we wait whilst holding the mutex. We try to always
+	 * do an unlocked wait before using a locked wait. However, when we
+	 * have to resort to a locked wait, we want that wait to be as short
+	 * as possible as the struct_mutex is our BKL that will stall the
+	 * driver and all clients.
+	 */
+	if (flags & I915_WAIT_LOCKED && req->engine->schedule)
+		req->engine->schedule(req, I915_PRIORITY_MAX);
+
 	trace_i915_gem_request_wait_begin(req);
 
 	add_wait_queue(&req->execute, &exec);
-- 
2.11.0
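
For context (not part of the patch): below is a minimal sketch of the kind of
caller the new branch targets, i.e. a wait issued while struct_mutex is held
and therefore passing I915_WAIT_LOCKED. The helper name and surrounding
details are illustrative assumptions, not code from this series.

/* Illustrative sketch only -- not part of the patch. A caller that waits
 * on a request while holding struct_mutex; with this patch, the wait will
 * first ask the scheduler (if present) to bump the request to
 * I915_PRIORITY_MAX, keeping the locked stall as short as possible.
 */
static int example_flush_request_locked(struct drm_i915_gem_request *req)
{
	long timeout;

	/* Caller is assumed to already hold struct_mutex at this point. */
	lockdep_assert_held(&req->i915->drm.struct_mutex);

	timeout = i915_wait_request(req,
				    I915_WAIT_LOCKED | I915_WAIT_INTERRUPTIBLE,
				    MAX_SCHEDULE_TIMEOUT);
	if (timeout < 0)
		return timeout;

	return 0;
}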