Re: [PATCH v2 5/5] drm/i915: Start writeback from the shrinker

Quoting Joonas Lahtinen (2017-06-13 15:07:04)
> On pe, 2017-06-09 at 12:03 +0100, Chris Wilson wrote:
> > When we are called to relieve mempressure via the shrinker, the only way
> > we can make progress is either by discarding unwanted pages (those
> > objects that userspace has marked MADV_DONTNEED) or by reclaiming the
> > dirty objects via swap. As we know that is the only way to make further
> > progress, we can initiate the writeback as we invalidate the objects.
> > This means the objects we put onto the inactive anon lru list are
> > already marked for reclaim+writeback and so will trigger a wait upon the
> > writeback inside direct reclaim, greatly improving the success rate of
> > direct reclaim on i915 objects.
> > 
> > The corollary is that we may start a slow swap on opportunistic
> > mempressure from the likes of the compaction + migration kthreads. This
> > is limited by those threads only being allowed to shrink idle pages, but
> > also by the fact that, if we reactivate the page before it is swapped out
> > by gpu activity, we only pay the cost of repinning the page. The cost is most
> > felt when an object is reused after mempressure, which hopefully
> > excludes the latency sensitive tasks (as we are just extending the
> > impact of swap thrashing to them).
> > 
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
> > Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > Cc: Matthew Auld <matthew.auld@xxxxxxxxx>
> > Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxxx>
> 
> <SNIP>
> 
> > +static void __start_writeback(struct drm_i915_gem_object *obj)
> > +{
> 
> <SNIP>
> 
> > +     /* Force any other users of this object to refault */
> > +     mapping = obj->base.filp->f_mapping;
> > +     unmap_mapping_range(mapping, 0, (loff_t)-1, 0);
> > +
> > +     /* Begin writeback on each dirty page */
> > +     for (i = 0; i < obj->base.size >> PAGE_SHIFT; i++) {
> > +             struct page *page;
> > +
> > +             page = find_lock_entry(mapping, i);
> > +             if (!page || radix_tree_exceptional_entry(page))
> > +                     continue;
> > +
> > +             if (!page_mapped(page) && clear_page_dirty_for_io(page)) {
> > +                     int ret;
> > +
> > +                     SetPageReclaim(page);
> > +                     ret = mapping->a_ops->writepage(page, &wbc);
> > +                     if (!PageWriteback(page))
> > +                             ClearPageReclaim(page);
> > +                     if (!ret)
> > +                             goto put;
> > +             }
> > +             unlock_page(page);
> > +put:
> > +             put_page(page);
> > +     }
> 
> Apart from this part (which should probably be a helper function
> outside of i915), the code is:
> 
> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>

Thanks for the review, I've pushed the fix plus simple patches, leaving
this one for more feedback.
-Chris
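
For reference, a minimal sketch of the generic helper Joonas is asking for
might look like the following. The name shmem_writeback_mapping() and the
writeback_control settings are assumptions here, not part of the posted
patch; the loop body mirrors the hunk quoted above.

static void shmem_writeback_mapping(struct address_space *mapping,
				    pgoff_t nr_pages)
{
	struct writeback_control wbc = {
		.sync_mode = WB_SYNC_NONE,	/* opportunistic, don't block */
		.nr_to_write = SWAP_CLUSTER_MAX,
		.range_start = 0,
		.range_end = LLONG_MAX,
		.for_reclaim = 1,
	};
	pgoff_t i;

	/* Force any other users of these pages to refault */
	unmap_mapping_range(mapping, 0, (loff_t)-1, 0);

	/* Begin writeback on each dirty page */
	for (i = 0; i < nr_pages; i++) {
		struct page *page;

		page = find_lock_entry(mapping, i);
		if (!page || radix_tree_exceptional_entry(page))
			continue;

		if (!page_mapped(page) && clear_page_dirty_for_io(page)) {
			int ret;

			/* Hint to vmscan to reclaim the page once clean */
			SetPageReclaim(page);
			ret = mapping->a_ops->writepage(page, &wbc);
			if (!PageWriteback(page))
				ClearPageReclaim(page);
			if (!ret)
				goto put; /* writepage() unlocked the page */
		}
		unlock_page(page);
put:
		put_page(page);
	}
}

The i915 caller would then only pass obj->base.filp->f_mapping and
obj->base.size >> PAGE_SHIFT, keeping the page-cache details out of the
driver.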
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



