Quoting Jason Gunthorpe (2020-06-24 17:50:57)
> On Wed, Jun 24, 2020 at 03:37:32PM +0100, Chris Wilson wrote:
> > Quoting Jason Gunthorpe (2020-06-24 15:25:44)
> > > On Wed, Jun 24, 2020 at 03:21:49PM +0100, Chris Wilson wrote:
> > > > Quoting Jason Gunthorpe (2020-06-24 15:16:04)
> > > > > On Wed, Jun 24, 2020 at 03:12:42PM +0100, Chris Wilson wrote:
> > > > > > Quoting Jason Gunthorpe (2020-06-24 13:39:10)
> > > > > > > On Wed, Jun 24, 2020 at 01:21:03PM +0100, Chris Wilson wrote:
> > > > > > > > Quoting Jason Gunthorpe (2020-06-24 13:10:53)
> > > > > > > > > On Wed, Jun 24, 2020 at 09:02:47AM +0100, Chris Wilson wrote:
> > > > > > > > > > When direct reclaim enters the shrinker and tries to reclaim pages, it
> > > > > > > > > > has to opportunistically unmap them [try_to_unmap_one]. For direct
> > > > > > > > > > reclaim, the calling context is unknown and may include attempts to
> > > > > > > > > > unmap one page of a dma object while attempting to allocate more pages
> > > > > > > > > > for that object. Pass the information along that we are inside an
> > > > > > > > > > opportunistic unmap that can allow that page to remain referenced and
> > > > > > > > > > mapped, and let the callback opt in to avoiding a recursive wait.
> > > > > > > > >
> > > > > > > > > i915 should already not be holding locks shared with the notifiers
> > > > > > > > > across allocations that can trigger reclaim. This is already required
> > > > > > > > > to use notifiers correctly anyhow - why do we need something in the
> > > > > > > > > notifiers?
> > > > > > > >
> > > > > > > > for (n = 0; n < num_pages; n++)
> > > > > > > >         pin_user_page()
> > > > > > > >
> > > > > > > > may call try_to_unmap_page from the lru shrinker for [0, n-1].
> > > > > > >
> > > > > > > Yes, of course you can't hold any locks that intersect with notifiers
> > > > > > > across pin_user_page()/get_user_page()
> > > > > >
> > > > > > What lock though? It's just the page refcount, shrinker asks us to drop
> > > > > > it [via mmu], we reply we would like to keep using that page as freeing
> > > > > > it for the current allocation is "robbing Peter to pay Paul".
> > > > >
> > > > > Maybe I'm unclear what this series is actually trying to fix?
> > > > >
> > > > > You said "avoiding a recursive wait" which sounds like some locking
> > > > > deadlock to me.
> > > >
> > > > It's the shrinker being called while we are allocating for/on behalf of
> > > > the object. As we are actively using the object, we don't want to free
> > > > it -- the partial object allocation being the clearest: if the object
> > > > consists of 2 pages, trying to free page 0 in order to allocate page 1
> > > > has to fail (and the shrinker should find another candidate to reclaim,
> > > > or fail the allocation).
> > >
> > > mmu notifiers are not for influencing policy of the mm.
> >
> > Its policy is "this may fail" regardless of the mmu notifier at this
> > point. That is not changed.
>
> MMU notifiers are for tracking updates, they are not allowed to fail.
> The one slightly weird case of non-blocking is the only exception.
>
> > Your suggestion is that we move the pages to the unevictable mapping so
> > that the shrinker LRU is never invoked on pages we have grabbed with
> > pin_user_page. Does that work with the rest of the mmu notifiers?
>
> That is beyond what I'm familiar with - but generally - if you want to
> influence decisions the MM is making then it needs to be at the
> front of the process and not inside notifiers.
>
> > So what you describe seems broadly appropriate to me. Sadly, it's a
> > mlock_vma_page problem all over again.
>
> I'm still a little unclear on what you are trying to fix - pinned
> pages are definitely not freed, do you have some case where pages
> which are pinned are being cleaned out from the MM despite being
> pinned? Sounds a bit strange, maybe that is worth addressing directly?

It suffices to say that pin_user_pages does not prevent try_to_unmap_one
from trying to revoke the page. But we could perhaps slip a
page_maybe_dma_pinned() in around there and see what happens.
-Chris
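
P.S. Purely for illustration, an untested sketch of where such a check
might be slipped in, assuming the usual rmap-walk callback shape of
try_to_unmap_one() in mm/rmap.c of that era; note page_maybe_dma_pinned()
can report false positives for pages with very high refcounts, so this
would only bias reclaim away from pinned pages, not guarantee anything:

	static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
				     unsigned long address, void *arg)
	{
		/*
		 * Sketch only: if the page looks DMA-pinned (pin_user_pages),
		 * abort the rmap walk and leave it mapped, letting reclaim go
		 * find another victim instead of unmapping pages we are still
		 * actively filling the object with.
		 */
		if (page_maybe_dma_pinned(page))
			return false;

		/* ... existing unmap logic unchanged ... */
		return true;
	}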