If we fail to allocate a new request, make sure we recover the pages
that are in the process of being freed by inserting an RCU barrier.

v2: Comment before the shrink and barrier in the error path.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_request.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 72bdc203716f..a0f451b4a4e8 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -696,6 +696,17 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
 		if (ret)
 			goto err_unreserve;
 
+		/*
+		 * We've forced the client to stall and catch up with whatever
+		 * backlog there might have been. As we are assuming that we
+		 * caused the mempressure, now is an opportune time to
+		 * recover as much memory from the request pool as is possible.
+		 * Having already penalized the client to stall, we spend
+		 * a little extra time to re-optimise page allocation.
+		 */
+		kmem_cache_shrink(dev_priv->requests);
+		rcu_barrier(); /* Recover the TYPESAFE_BY_RCU pages */
+
 		req = kmem_cache_alloc(dev_priv->requests, GFP_KERNEL);
 		if (!req) {
 			ret = -ENOMEM;
-- 
2.15.1
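
For anyone following along who is unfamiliar with SLAB_TYPESAFE_BY_RCU:
the reason kmem_cache_shrink() alone is not enough here is that slabs
created with that flag hand their backing pages to the page allocator
only after an RCU grace period, so the shrink must be paired with an
rcu_barrier() before the retry allocation can see the recovered memory.
Below is a minimal, self-contained sketch of that pairing; the cache,
struct and function names are hypothetical, invented purely for
illustration, and are not part of this patch.

#include <linux/slab.h>
#include <linux/rcupdate.h>

struct example_obj {
	int payload;
};

static struct kmem_cache *example_cache;

static int example_cache_init(void)
{
	/*
	 * SLAB_TYPESAFE_BY_RCU defers returning a slab's pages to the
	 * page allocator until an RCU grace period has elapsed, so
	 * lockless readers may still safely touch "freed" objects.
	 */
	example_cache = kmem_cache_create("example_cache",
					  sizeof(struct example_obj), 0,
					  SLAB_TYPESAFE_BY_RCU, NULL);
	return example_cache ? 0 : -ENOMEM;
}

static void example_cache_reclaim(void)
{
	/* Release empty slabs; their pages are still held by RCU... */
	kmem_cache_shrink(example_cache);

	/*
	 * ...so wait for the outstanding RCU callbacks that actually
	 * free those pages. Only after this point can a subsequent
	 * allocation draw on the recovered memory, which is the same
	 * shrink + barrier ordering used in the patch above.
	 */
	rcu_barrier();
}

Note that rcu_barrier(), not synchronize_rcu(), is the right call: the
pages are released from an RCU callback, and only rcu_barrier() waits
for pending callbacks to complete rather than merely for a grace period
to end.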