Re: [RFC] mm,drm/i915: Mark pinned shmemfs pages as unevictable

Quoting Michal Hocko (2017-06-06 13:14:18)
> On Tue 06-06-17 13:04:36, Chris Wilson wrote:
> > Similar in principle to the treatment of get_user_pages, pages that
> > i915.ko acquires from shmemfs are not immediately reclaimable and so
> > should be excluded from the mm accounting and vmscan until they have
> > been returned to the system via shrink_slab/i915_gem_shrink. By moving
> > the unreclaimable pages off the inactive anon lru, not only should
> > vmscan be improved by avoiding walking unreclaimable pages, but the
> > system should also have a better idea of how much memory it can reclaim
> > at that moment in time.
> 
> That is certainly desirable. Peter has proposed a generic pin_page (or
> similar) API. What happened with it? I think it would be a better
> approach than (ab)using mlock API. I am also not familiar with the i915
> code to be sure that using lock_page is really safe here. I think that
> all we need is to simply move those pages to/from the unevictable LRU
> list on pin/unpin.

I just had the opportunity to try this mlock_vma_page() hack on a
borderline swapping system (i.e. lots of vmpressure between i915 buffers
and the buffercache), and marking the i915 pages as unevictable makes a
huge difference in avoiding stalls in direct reclaim across the system.

Reading back over the thread, it seems that the simplest approach going
forward is a small API for managing the pages on the unevictable LRU?
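For concreteness, a minimal sketch of what such an API might look like --
the names (shmem_page_pin/shmem_page_unpin) and placement are my
assumptions, not existing kernel functions; it just wraps the same
lock_page + mlock_vma_page dance the patch below open-codes:

```c
/*
 * Hypothetical sketch only.  The idea is to give drivers a sanctioned
 * entry point for moving long-term pinned shmemfs pages to/from the
 * unevictable LRU, instead of exporting the mlock internals directly.
 */
void shmem_page_pin(struct page *page)
{
	lock_page(page);	/* mlock_vma_page() requires PageLocked */
	mlock_vma_page(page);	/* move the page to the unevictable LRU */
	unlock_page(page);
}

void shmem_page_unpin(struct page *page)
{
	lock_page(page);
	munlock_vma_page(page);	/* return it to the evictable LRUs */
	unlock_page(page);
}
```

i915_gem_object_get_pages_gtt()/put_pages_gtt() would then call these
instead of open-coding the sequence, and the mm side would keep the
freedom to change how unevictable pages are tracked behind the API.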

> > Note, however, the interaction with shrink_slab which will move some
> > mlocked pages back to the inactive anon lru.
> > 
> > Suggested-by: Dave Hansen <dave.hansen@xxxxxxxxx>
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> > Cc: Matthew Auld <matthew.auld@xxxxxxxxx>
> > Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> > Cc: "Kirill A . Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
> > Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/i915_gem.c | 17 ++++++++++++++++-
> >  mm/mlock.c                      |  2 ++
> >  2 files changed, 18 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 8cb811519db1..37a98fbc6a12 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -2193,6 +2193,9 @@ void __i915_gem_object_truncate(struct drm_i915_gem_object *obj)
> >       obj->mm.pages = ERR_PTR(-EFAULT);
> >  }
> >  
> > +extern void mlock_vma_page(struct page *page);
> > +extern unsigned int munlock_vma_page(struct page *page);
> > +
> >  static void
> >  i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
> >                             struct sg_table *pages)
> > @@ -2214,6 +2217,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
> >               if (obj->mm.madv == I915_MADV_WILLNEED)
> >                       mark_page_accessed(page);
> >  
> > +             lock_page(page);
> > +             munlock_vma_page(page);
> > +             unlock_page(page);
> > +
> >               put_page(page);
> >       }
> >       obj->mm.dirty = false;
> > @@ -2412,6 +2419,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
> >               }
> >               last_pfn = page_to_pfn(page);
> >  
> > +             lock_page(page);
> > +             mlock_vma_page(page);
> > +             unlock_page(page);
> > +
> >               /* Check that the i965g/gm workaround works. */
> >               WARN_ON((gfp & __GFP_DMA32) && (last_pfn >= 0x00100000UL));
> >       }
> > @@ -2450,8 +2461,12 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
> >  err_sg:
> >       sg_mark_end(sg);
> >  err_pages:
> > -     for_each_sgt_page(page, sgt_iter, st)
> > +     for_each_sgt_page(page, sgt_iter, st) {
> > +             lock_page(page);
> > +             munlock_vma_page(page);
> > +             unlock_page(page);
> >               put_page(page);
> > +     }
> >       sg_free_table(st);
> >       kfree(st);
> >  
> > diff --git a/mm/mlock.c b/mm/mlock.c
> > index b562b5523a65..531d9f8fd033 100644
> > --- a/mm/mlock.c
> > +++ b/mm/mlock.c
> > @@ -94,6 +94,7 @@ void mlock_vma_page(struct page *page)
> >                       putback_lru_page(page);
> >       }
> >  }
> > +EXPORT_SYMBOL_GPL(mlock_vma_page);
> >  
> >  /*
> >   * Isolate a page from LRU with optional get_page() pin.
> > @@ -211,6 +212,7 @@ unsigned int munlock_vma_page(struct page *page)
> >  out:
> >       return nr_pages - 1;
> >  }
> > +EXPORT_SYMBOL_GPL(munlock_vma_page);
> >  
> >  /*
> >   * convert get_user_pages() return value to posix mlock() error
> > -- 
> > 2.11.0
> > 
> 
> -- 
> Michal Hocko
> SUSE Labs
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .