Re: [PATCH] drm/i915/gtt: Pull global wc page stash under its own locking

Quoting Tvrtko Ursulin (2018-07-04 13:48:18)
> 
> On 04/07/2018 12:39, Chris Wilson wrote:
> > Currently, the wc-stash used for providing flushed WC pages ready for
> > constructing the page directories is assumed to be protected by the
> > struct_mutex. However, we want to remove this global lock and so must
> > install a replacement global lock for accessing the global wc-stash (the
> > per-vm stash continues to be guarded by the vm).
> > 
> > We need to push ahead on this patch due to an oversight in hastily
> > removing the struct_mutex guard around the igt_ppgtt_alloc selftest. No
> > matter, it will prove very useful (i.e. will be required) in the near
> > future.
> > 
> > Fixes: 1f6f00238abf ("drm/i915/selftests: Drop struct_mutex around lowlevel pggtt allocation")
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > ---
> >   drivers/gpu/drm/i915/i915_drv.h     |  5 ++
> >   drivers/gpu/drm/i915/i915_gem_gtt.c | 83 ++++++++++++++++-------------
> >   2 files changed, 52 insertions(+), 36 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 2cefe4c30f88..696c0b36f81e 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -954,6 +954,11 @@ struct i915_gem_mm {
> >        */
> >       struct pagevec wc_stash;
> >   
> > +     /**
> > +      * Lock for the small stash of WC pages.
> > +      */
> > +     spinlock_t wc_lock;
> > +
> >       /**
> >        * tmpfs instance used for shmem backed objects
> >        */
> > diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > index c6aa761ca085..e0e89e3ae43b 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > @@ -377,25 +377,28 @@ static gen6_pte_t iris_pte_encode(dma_addr_t addr,
> >   
> >   static struct page *vm_alloc_page(struct i915_address_space *vm, gfp_t gfp)
> >   {
> > -     struct pagevec *pvec = &vm->free_pages;
> > -     struct pagevec stash;
> > +     struct pagevec *stash = &vm->free_pages;
> > +     struct pagevec *pvec;
> > +     struct page *page;
> >   
> >       if (I915_SELFTEST_ONLY(should_fail(&vm->fault_attr, 1)))
> >               i915_gem_shrink_all(vm->i915);
> >   
> > -     if (likely(pvec->nr))
> > -             return pvec->pages[--pvec->nr];
> > +     if (likely(stash->nr))
> > +             return stash->pages[--stash->nr];
> 
> This is still covered by the mutex?

It's covered by the i915_address_space guard (we only allow one thread
at a time to allocate/remove a range in the drm_mm/vm). In this case,
that guard happens to still be the struct_mutex, just expressed
elsewhere.
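Roughly, the lookup order is now (a condensed sketch of the quoted
hunks, not the exact code; the !pt_kmap_wc shortcut and the batch
allocation slow path are elided):

  static struct page *vm_alloc_page(struct i915_address_space *vm, gfp_t gfp)
  {
          struct page *page = NULL;
          struct pagevec *pvec;

          /* Per-vm stash: serialised by whoever owns range allocation
           * in this vm (today still the struct_mutex, held by our
           * caller), so it needs no lock of its own.
           */
          if (vm->free_pages.nr)
                  return vm->free_pages.pages[--vm->free_pages.nr];

          /* Global stash: shared by every vm, so it now carries its
           * own spinlock instead of borrowing the struct_mutex.
           */
          pvec = &vm->i915->mm.wc_stash;
          spin_lock(&vm->i915->mm.wc_lock);
          if (pvec->nr)
                  page = pvec->pages[--pvec->nr];
          spin_unlock(&vm->i915->mm.wc_lock);

          return page; /* NULL falls through to batch allocation */
  }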

> >       if (!vm->pt_kmap_wc)
> >               return alloc_page(gfp);
> >   
> > -     /* A placeholder for a specific mutex to guard the WC stash */
> > -     lockdep_assert_held(&vm->i915->drm.struct_mutex);
> > -
> >       /* Look in our global stash of WC pages... */
> >       pvec = &vm->i915->mm.wc_stash;
> > +     page = NULL;
> > +     spin_lock(&vm->i915->mm.wc_lock);
> >       if (likely(pvec->nr))
> > -             return pvec->pages[--pvec->nr];
> > +             page = pvec->pages[--pvec->nr];
> > +     spin_unlock(&vm->i915->mm.wc_lock);
> > +     if (page)
> > +             return page;
> >   
> >       /*
> >        * Otherwise batch allocate pages to amoritize cost of set_pages_wc.
> > @@ -405,7 +408,6 @@ static struct page *vm_alloc_page(struct i915_address_space *vm, gfp_t gfp)
> >        * So we add our WB pages into a temporary pvec on the stack and merge
> >        * them into the WC stash after all the allocations are complete.
> >        */
> > -     pagevec_init(&stash);
> >       do {
> >               struct page *page;
> >   
> > @@ -413,28 +415,30 @@ static struct page *vm_alloc_page(struct i915_address_space *vm, gfp_t gfp)
> >               if (unlikely(!page))
> >                       break;
> >   
> > -             stash.pages[stash.nr++] = page;
> > -     } while (stash.nr < pagevec_space(pvec));
> > +             stash->pages[stash->nr++] = page;
> > +     } while (pagevec_space(stash));
> 
> The comment is talking about a temporary stack stash, but that's not
> the case any more. At minimum the comment needs updating, but I don't
> understand ATM if the new approach is safe, or in other words what
> was going wrong before. If there is another path to the stash then
> stash->nr++ is obviously not safe.

It's a temporary stash inside the vm's own pagevec, which this thread
owns.
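In other words, the loop fills the vm's own pagevec, and only the
result of the conversion is ever shared (sketching the quoted hunks):

  struct pagevec *stash = &vm->free_pages;

  /* Batch WB allocations straight into the vm's stash; no lock is
   * needed since only the vm owner ever touches vm->free_pages.
   */
  do {
          struct page *page;

          page = alloc_page(gfp);
          if (unlikely(!page))
                  break;

          stash->pages[stash->nr++] = page;
  } while (pagevec_space(stash));

  /* One set_pages_array_wc() call covers the whole batch (the point
   * of batching), then one page goes to the caller and the rest stay
   * stashed in the vm for the next allocation.
   */
  if (stash->nr && !set_pages_array_wc(stash->pages, stash->nr))
          page = stash->pages[--stash->nr];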
> 
> >   
> > -     if (stash.nr) {
> > -             int nr = min_t(int, stash.nr, pagevec_space(pvec));
> > -             struct page **pages = stash.pages + stash.nr - nr;
> > +     if (stash->nr && !set_pages_array_wc(stash->pages, stash->nr)) {
> 
> Previously the test was for pages which the local thread obviously 
> owned. Now I am not sure what the condition says.

Still owned by the local thread; they were only just allocated into
the vm's stash above, and nothing else can reach vm->free_pages.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



