Re: [PATCH] mm: Skip opportunistic reclaim for dma pinned pages

Quoting Jason Gunthorpe (2020-06-24 20:21:16)
> On Wed, Jun 24, 2020 at 08:14:17PM +0100, Chris Wilson wrote:
> > A general rule of thumb is that shrinkers should be fast and effective.
> > They are called from direct reclaim at the most inconvenient of times,
> > when the caller is waiting for a page. If we attempt to reclaim a page
> > being pinned for active dma [pin_user_pages()], we will incur far
> > greater latency than for a normal anonymous page mapped multiple
> > times. Worse, the page may be in use indefinitely by the HW and unable
> > to be reclaimed in a timely manner.
> 
> A pinned page can't be migrated, discarded or swapped by definition -
> it would cause data corruption.
> 
> So, how do things even get here and/or work today at all? I think the
> explanation is missing something important.

[<0>] userptr_mn_invalidate_range_start+0xa7/0x170 [i915]
[<0>] __mmu_notifier_invalidate_range_start+0x110/0x150
[<0>] try_to_unmap_one+0x790/0x870
[<0>] rmap_walk_file+0xe9/0x230
[<0>] rmap_walk+0x27/0x30
[<0>] try_to_unmap+0x89/0xc0
[<0>] shrink_page_list+0x88a/0xf50
[<0>] shrink_inactive_list+0x137/0x2f0
[<0>] shrink_lruvec+0x4ec/0x5f0
[<0>] shrink_node+0x15d/0x410
[<0>] try_to_free_pages+0x17f/0x430
[<0>] __alloc_pages_slowpath+0x2ab/0xcc0
[<0>] __alloc_pages_nodemask+0x1ad/0x1e0
[<0>] new_slab+0x2d8/0x310
[<0>] ___slab_alloc.constprop.0+0x288/0x520
[<0>] __slab_alloc.constprop.0+0xd/0x20
[<0>] kmem_cache_alloc_trace+0x1ad/0x1c0

and that invalidation hits an object whose pages are still pinned by an
active pin_user_pages().
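
For reference, the invalidation that reclaim triggers there has roughly
this shape (a simplified sketch with hypothetical names, not the actual
i915 userptr code): the callback cannot return until the pages are safe
to unmap, which can mean a long synchronous wait for the HW.

	/*
	 * Sketch of an invalidate_range_start callback for a
	 * pin_user_pages()-backed object. Struct and helper names
	 * here are hypothetical; direct reclaim blocks in this
	 * callback until the DMA is complete.
	 */
	struct userptr_object {
		struct mmu_notifier mn;
		struct page **pages;
		unsigned long num_pages;
	};

	static int userptr_invalidate_range_start(struct mmu_notifier *mn,
				const struct mmu_notifier_range *range)
	{
		struct userptr_object *obj =
			container_of(mn, struct userptr_object, mn);

		if (!mmu_notifier_range_blockable(range))
			return -EAGAIN; /* nonblocking contexts must bail */

		/* Potentially unbounded: the HW decides when it is done. */
		wait_for_hw_idle(obj); /* hypothetical helper */

		unpin_user_pages(obj->pages, obj->num_pages);
		return 0;
	}

So reclaim pays for the rmap walk plus an indefinite wait, all for a
page it could have known to skip up front.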

Is there any information in particular that would help?
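
For reference, the proposed skip amounts to an early check in
shrink_page_list() along these lines (a sketch, not the exact hunk;
page_maybe_dma_pinned() can report false positives for pages with very
high refcounts, hence "opportunistic"):

	/*
	 * Sketch of the proposed check in mm/vmscan.c:shrink_page_list().
	 * If the page looks dma-pinned, don't pay for the rmap walk and
	 * mmu notifier invalidation; keep the page and revisit it after
	 * it has been unpinned.
	 */
	if (page_maybe_dma_pinned(page))
		goto activate_locked;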
-Chris


