i915_gem_shrink() will scan the bound list only if the device is not
suspended, but in an OOM failure scenario it becomes absolutely necessary
to release as much memory as possible. Likewise, on an allocation failure
from the vmap address space, it is incumbent on the driver to reap all of
its vmaps. So, add rpm get/put in i915_gem_shrinker_oom() and
i915_gem_shrinker_vmap() to ensure shrinking of bound objects as well.

Signed-off-by: Praveen Paneri <praveen.paneri@xxxxxxxxx>
Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_shrinker.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index 9a24415..79004f3 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -357,7 +357,9 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
 	if (!i915_gem_shrinker_lock_uninterruptible(dev_priv, &slu, 5000))
 		return NOTIFY_DONE;

+	intel_runtime_pm_get(dev_priv);
 	freed_pages = i915_gem_shrink_all(dev_priv);
+	intel_runtime_pm_put(dev_priv);

 	/* Because we may be allocating inside our own driver, we cannot
 	 * assert that there are no objects with pinned pages that are not
@@ -410,11 +412,13 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
 	if (ret)
 		goto out;

+	intel_runtime_pm_get(dev_priv);
 	freed_pages += i915_gem_shrink(dev_priv, -1UL,
 				       I915_SHRINK_BOUND |
 				       I915_SHRINK_UNBOUND |
 				       I915_SHRINK_ACTIVE |
 				       I915_SHRINK_VMAPS);
+	intel_runtime_pm_put(dev_priv);

 	/* We also want to clear any cached iomaps as they wrap vmap */
 	list_for_each_entry_safe(vma, next,
-- 
1.9.1
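
For readers less familiar with the runtime PM interplay here: the change
simply brackets the shrink call with a runtime PM reference so the device
cannot runtime-suspend while the bound list is being scanned. Below is a
minimal, self-contained sketch of that get/do-work/put pattern in plain
user-space C. pm_get(), pm_put() and shrink_all() are hypothetical
stand-ins for illustration only, not the real intel_runtime_pm_get(),
intel_runtime_pm_put() and i915_gem_shrink_all() kernel functions.

/*
 * Sketch of the get/put-around-work pattern the patch applies.
 * Not i915 code; all names below are illustrative stand-ins.
 */
#include <stdio.h>

static int pm_refcount;                 /* stand-in for the runtime PM refcount */

static void pm_get(void)                /* analogous to intel_runtime_pm_get()  */
{
	pm_refcount++;
}

static void pm_put(void)                /* analogous to intel_runtime_pm_put()  */
{
	pm_refcount--;
}

static unsigned long shrink_all(void)   /* stand-in for i915_gem_shrink_all()   */
{
	/* Bound objects can only be reaped while the device is held awake. */
	return pm_refcount > 0 ? 42 : 0;
}

int main(void)
{
	unsigned long freed;

	pm_get();               /* prevent runtime suspend during the shrink */
	freed = shrink_all();   /* bound list is scanned as well             */
	pm_put();               /* allow the device to suspend again         */

	printf("freed %lu pages\n", freed);
	return 0;
}

The ordering mirrors the two hunks above: the reference is taken before the
shrink starts and dropped only after it returns, so the shrinker never races
with runtime suspend.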