Since we can only swap out shmemfs objects, those are the only ones that
influence the shrinker's ability to free pages. Currently, all
non-shmemfs objects have a raised pages_pin_count to protect them from
the shrinker, so this just makes the logic for can_release_pages()
clearer (and safer in the future, so that we don't overestimate our
ability to free up pages from future non-swappable objects).

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_shrinker.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index cb225e039d48..e44c6358bd5a 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -70,6 +70,10 @@ static bool swap_available(void)
 
 static bool can_release_pages(struct drm_i915_gem_object *obj)
 {
+	/* Only shmemfs objects are backed by swap */
+	if (!obj->base.filp)
+		return false;
+
 	/* Only report true if by unbinding the object and putting its pages
 	 * we can actually make forward progress towards freeing physical
 	 * pages.
@@ -349,18 +353,12 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
 	 */
 	unbound = bound = unevictable = 0;
 	list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list) {
-		if (!obj->base.filp) /* not backed by a freeable object */
-			continue;
-
 		if (!can_release_pages(obj))
 			unevictable += obj->base.size >> PAGE_SHIFT;
 		else
 			unbound += obj->base.size >> PAGE_SHIFT;
 	}
 	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
-		if (!obj->base.filp)
-			continue;
-
 		if (!can_release_pages(obj))
 			unevictable += obj->base.size >> PAGE_SHIFT;
 		else
-- 
2.8.1
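
For reference, a sketch of what can_release_pages() looks like with this
patch applied. Only the first hunk above is from the patch itself; the
pin-count and madvise checks below are reconstructed from the
surrounding driver code of this era for illustration (in particular the
num_vma_bound() helper is assumed from context) and may not match the
tree exactly:

    static bool can_release_pages(struct drm_i915_gem_object *obj)
    {
    	/* Only shmemfs objects are backed by swap; anything else
    	 * (userptr, dma-buf imports, stolen) cannot be swapped out,
    	 * so putting its pages frees nothing for the system.
    	 */
    	if (!obj->base.filp)
    		return false;

    	/* If the pages are pinned for any reason other than being
    	 * bound to the GPU, unbinding will not drop our pin on the
    	 * pages, so no forward progress is possible. (Reconstructed:
    	 * num_vma_bound() counted the object's bound VMA.)
    	 */
    	if (obj->pages_pin_count != num_vma_bound(obj))
    		return false;

    	/* Physical pages can only be returned to the system if their
    	 * contents can be discarded (marked purgeable via madvise) or
    	 * moved out to swap.
    	 */
    	return swap_available() || obj->madv == I915_MADV_DONTNEED;
    }

The point of folding the filp check into can_release_pages() is that
the policy now lives in one place: callers such as the OOM notifier
loops above no longer need their own open-coded !obj->base.filp test
and so cannot drift out of sync with the shrinker's idea of what is
actually freeable.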