An interesting issue that cropped up when making the pagetables be allocated
and freed concurrently (i.e. removing their grandiose struct_mutex guard) was
that we would overflow the page stash. This happens when multiple allocators
grab WC pages and fill the vm's local page stash; when we then free another
page, the stash is already full and we overflow. The fix is quite simple:
check for a full page stash before adding another page. This results in us
keeping a vm-local page stash around for much longer, which is both a
blessing and a curse.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Matthew Auld <matthew.auld@xxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 7496cce0d798..2d7a968d4fd5 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -341,11 +341,11 @@ static struct page *stash_pop_page(struct pagestash *stash)
 
 static void stash_push_pagevec(struct pagestash *stash, struct pagevec *pvec)
 {
-	int nr;
+	unsigned int nr;
 
 	spin_lock_nested(&stash->lock, SINGLE_DEPTH_NESTING);
 
-	nr = min_t(int, pvec->nr, pagevec_space(&stash->pvec));
+	nr = min_t(typeof(nr), pvec->nr, pagevec_space(&stash->pvec));
 	memcpy(stash->pvec.pages + stash->pvec.nr,
 	       pvec->pages + pvec->nr - nr,
 	       sizeof(pvec->pages[0]) * nr);
@@ -399,7 +399,8 @@ static struct page *vm_alloc_page(struct i915_address_space *vm, gfp_t gfp)
 	page = stack.pages[--stack.nr];
 
 	/* Merge spare WC pages to the global stash */
-	stash_push_pagevec(&vm->i915->mm.wc_stash, &stack);
+	if (stack.nr)
+		stash_push_pagevec(&vm->i915->mm.wc_stash, &stack);
 
 	/* Push any surplus WC pages onto the local VM stash */
 	if (stack.nr)
@@ -469,8 +470,9 @@ static void vm_free_page(struct i915_address_space *vm, struct page *page)
 	 */
 	might_sleep();
 	spin_lock(&vm->free_pages.lock);
-	if (!pagevec_add(&vm->free_pages.pvec, page))
+	if (!pagevec_space(&vm->free_pages.pvec))
 		vm_free_pages_release(vm, false);
+	pagevec_add(&vm->free_pages.pvec, page);
 	spin_unlock(&vm->free_pages.lock);
 }
 
-- 
2.20.1
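
The ordering problem fixed in the last hunk can be illustrated with a minimal
standalone sketch. The stash type and helpers below (stash_space, stash_add,
stash_release) are simplified, hypothetical stand-ins for the kernel's pagevec
and vm_free_pages_release(), not the i915 code itself; the only property
assumed is that the add helper, like pagevec_add(), appends unconditionally
and returns the space left afterwards.

/*
 * Simplified standalone sketch of the overflow and the fix.  The names
 * below are illustrative only, not the pagevec or i915 API.
 */
#include <stdio.h>

#define STASH_CAPACITY 15	/* pagevec-like fixed capacity */

struct stash {
	unsigned int nr;
	void *pages[STASH_CAPACITY];
};

/* How many free slots remain. */
static unsigned int stash_space(const struct stash *s)
{
	return STASH_CAPACITY - s->nr;
}

/* Append a page unconditionally; returns the space left, 0 if now full. */
static unsigned int stash_add(struct stash *s, void *page)
{
	s->pages[s->nr++] = page;
	return stash_space(s);
}

/* Stand-in for vm_free_pages_release(): drain the stash. */
static void stash_release(struct stash *s)
{
	printf("releasing %u pages\n", s->nr);
	s->nr = 0;
}

/*
 * Buggy ordering: add first, release only once full.  If the stash is
 * already full on entry (e.g. filled by a concurrent allocator pushing
 * its surplus WC pages), the add writes past the end of pages[] before
 * the release ever happens.
 */
static void free_page_buggy(struct stash *s, void *page)
{
	if (!stash_add(s, page))	/* overruns pages[] when s->nr == STASH_CAPACITY */
		stash_release(s);
}

/*
 * Fixed ordering, mirroring the patch: check for space first, flush the
 * stash if it is full, and only then add the page.
 */
static void free_page_fixed(struct stash *s, void *page)
{
	if (!stash_space(s))
		stash_release(s);
	stash_add(s, page);
}

int main(void)
{
	static struct stash s;
	int dummy[64];

	/*
	 * The buggy ordering happens to work while the stash still has
	 * space on entry; it only overruns pages[] once the stash is full.
	 */
	free_page_buggy(&s, &dummy[0]);

	/* The fixed ordering is safe regardless of how full the stash is. */
	for (int i = 1; i < 64; i++)
		free_page_fixed(&s, &dummy[i]);

	printf("%u pages left in the stash\n", s.nr);
	return 0;
}

With the fixed ordering, a stash that was filled behind our back is drained
before the new page is written, so the backing array is never overrun.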