Re: [PATCH 03/10] drm/i915: Shrink the GEM kmem_caches upon idling

Quoting Tvrtko Ursulin (2018-01-16 10:00:16)
> 
> On 15/01/2018 21:24, Chris Wilson wrote:
> > When we finally decide the gpu is idle, that is a good time to shrink
> > our kmem_caches.
> > 
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > ---
> >   drivers/gpu/drm/i915/i915_gem.c | 22 ++++++++++++++++++++++
> >   1 file changed, 22 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index a8840a514377..8547f5214599 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -4709,6 +4709,21 @@ i915_gem_retire_work_handler(struct work_struct *work)
> >       }
> >   }
> >   
> > +static void shrink_caches(struct drm_i915_private *i915)
> > +{
> > +     /*
> > +      * kmem_cache_shrink() discards empty slabs and reorders partially
> > +      * filled slabs to prioritise allocating from the mostly full slabs,
> > +      * with the aim of reducing fragmentation.
> > +      */
> 
> This makes it sound like it would be a very good thing in general.
> 
> > +     kmem_cache_shrink(i915->priorities);
> > +     kmem_cache_shrink(i915->dependencies);
> > +     kmem_cache_shrink(i915->requests);
> > +     kmem_cache_shrink(i915->luts);
> > +     kmem_cache_shrink(i915->vmas);
> > +     kmem_cache_shrink(i915->objects);
> > +}
> > +
> >   static inline bool
> >   new_requests_since_last_retire(const struct drm_i915_private *i915)
> >   {
> > @@ -4796,6 +4811,13 @@ i915_gem_idle_work_handler(struct work_struct *work)
> >               GEM_BUG_ON(!dev_priv->gt.awake);
> >               i915_queue_hangcheck(dev_priv);
> >       }
> > +
> > +     rcu_barrier();
> 
> Ugh, the more complexity we sprinkle around, the more difficult it
> becomes to maintain the code base for mere mortals. At the very
> minimum a comment is needed here.

This one is because some of our kmem caches (e.g. requests) are special
and use TYPESAFE_BY_RCU, which means we don't release the pages back to
the system until after an RCU grace period. The barrier here is just to
encourage that we have a grace period between each idle event. Though
it looks sensible to tie a grace period directly to kmem_cache_shrink(),
the shrink only takes effect after the grace period has elapsed, so this
is just to ensure the pages we used last time are given back.
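
As a minimal sketch of the intended ordering (example_cache here is a
made-up name standing in for any SLAB_TYPESAFE_BY_RCU cache, not code
from the patch):

	/*
	 * Objects freed back to a SLAB_TYPESAFE_BY_RCU cache do not
	 * have their pages returned to the system until an RCU grace
	 * period has elapsed, so a shrink issued immediately after
	 * the last free may still see the slabs as occupied.
	 */
	kmem_cache_free(example_cache, obj);

	/* Wait for the outstanding RCU callbacks that free the pages... */
	rcu_barrier();

	/* ...so that the now-empty slabs can actually be discarded. */
	kmem_cache_shrink(example_cache);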
 
> What about activity other than requests? Like active mmap_gtt, which
> might at least create busyness on the vma and object caches and is
> not correlated with the idle work handler firing.

We only have a single periodic ticker atm, the retire worker (sketched
below)... Patches to add a similar ticker for GTT mmaps, to coordinate
with rpm etc., lack some love.
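
For reference, that ticker re-arms itself roughly like this (a
simplified sketch from memory, not the exact code):

	static void i915_gem_retire_work_handler(struct work_struct *work)
	{
		struct drm_i915_private *dev_priv =
			container_of(work, typeof(*dev_priv),
				     gt.retire_work.work);

		/* Come back later if the device is busy... */
		if (mutex_trylock(&dev_priv->drm.struct_mutex)) {
			i915_gem_retire_requests(dev_priv);
			mutex_unlock(&dev_priv->drm.struct_mutex);
		}

		/* Keep ticking once a second while the GPU is awake */
		if (READ_ONCE(dev_priv->gt.awake))
			queue_delayed_work(dev_priv->wq,
					   &dev_priv->gt.retire_work,
					   round_jiffies_up_relative(HZ));
	}

A GTT mmap ticker would follow the same pattern: re-arm itself while
there is mmap activity and shrink the vma/object caches once that
finally goes quiet.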
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx