If we fail to allocate a new request, make sure we recover the pages
that are in the process of being freed by inserting an RCU barrier.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_request.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 86c86a17d558..94429405e776 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -696,6 +696,9 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
 	if (ret)
 		goto err_unreserve;
 
+	kmem_cache_shrink(dev_priv->requests);
+	rcu_barrier();
+
 	req = kmem_cache_alloc(dev_priv->requests, GFP_KERNEL);
 	if (!req) {
 		ret = -ENOMEM;
-- 
2.15.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
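
As background for the change above, here is a minimal, self-contained sketch of
the same recovery pattern; it is not the i915 code itself. It assumes a slab
cache created with SLAB_TYPESAFE_BY_RCU, whose empty slab pages are only handed
back to the page allocator once an RCU grace period has elapsed, and the names
demo_cache, demo_obj and demo_alloc are hypothetical. On allocation failure,
kmem_cache_shrink() queues the cache's empty slabs for release and rcu_barrier()
waits for the pending RCU callbacks, so the retried GFP_KERNEL allocation has a
chance of reusing those pages.

/*
 * Hypothetical sketch (not the i915 code): recover memory from an
 * RCU-typesafe kmem_cache before retrying a failed allocation.
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct demo_obj {			/* stand-in for the real request struct */
	unsigned long payload[32];
};

static struct kmem_cache *demo_cache;

static struct demo_obj *demo_alloc(void)
{
	struct demo_obj *obj;

	/* First attempt: fail quickly rather than invoking the OOM killer. */
	obj = kmem_cache_alloc(demo_cache,
			       GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
	if (likely(obj))
		return obj;

	/*
	 * Allocation failed: release as much memory as possible from the
	 * cache itself.  kmem_cache_shrink() gives up the empty slabs, and
	 * rcu_barrier() waits for the RCU callbacks that actually return
	 * RCU-deferred pages, so the retry below can make use of them.
	 */
	kmem_cache_shrink(demo_cache);
	rcu_barrier();

	return kmem_cache_alloc(demo_cache, GFP_KERNEL);
}

static int __init demo_init(void)
{
	struct demo_obj *obj;

	demo_cache = kmem_cache_create("demo_objs", sizeof(struct demo_obj),
				       0, SLAB_TYPESAFE_BY_RCU, NULL);
	if (!demo_cache)
		return -ENOMEM;

	obj = demo_alloc();
	if (!obj) {
		kmem_cache_destroy(demo_cache);
		return -ENOMEM;
	}

	kmem_cache_free(demo_cache, obj);
	return 0;
}

static void __exit demo_exit(void)
{
	rcu_barrier();			/* flush any pending RCU frees */
	kmem_cache_destroy(demo_cache);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

In this sketch the first attempt uses __GFP_RETRY_MAYFAIL | __GFP_NOWARN so a
transient failure falls through to the explicit shrink-and-barrier path instead
of escalating to the OOM killer.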