Re: pages pinned for BO lifetime and security

On 22.07.20 at 09:46, Daniel Vetter wrote:
On Wed, Jul 22, 2020 at 9:19 AM Christian König
<christian.koenig@xxxxxxx> wrote:
On 22.07.20 at 02:22, Gurchetan Singh wrote:
Of the desktop GPU drivers, i915's shrinker certainly supports purging
to swap. TTM is a bit hard to follow; I can't really tell whether
amdgpu or nouveau support that. virtio-gpu is more commonly found on
systems with swap, so I think it should follow desktop practices?

What we do, at least in amdgpu, radeon, i915 and nouveau, is to only allow pinning for scanout, and that in turn is limited by the physical number of CRTCs on the board.
Somewhat aside, but I'm not sure the ttm shrinker really works like
that. I think there are two parts:
1. A kernel thread which takes buffers and unbinds them when we're over
the ttm global limit. This is the ttm_shrink_work stuff, and it only
shrinks if the zone is over a hard limit; below that it just leaves
buffers pinned.

2. The actual core mm shrinker, which releases pages held in the cache
by ttm_page_alloc_dma.c. But that only happens once buffers have been
unbound by the first thread, so anything below those hard limits is
not shrinkable. And IIRC those hard limits are around half of system
memory (last time I looked through this stuff, at least).
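
For reference, a minimal sketch of the core mm shrinker interface that
part 2 plugs into (the ttm_pool_* helpers below are hypothetical
stand-ins for the real bookkeeping in ttm_page_alloc_dma.c):

#include <linux/shrinker.h>

/* Hypothetical stand-ins for the pool bookkeeping in
 * ttm_page_alloc_dma.c; the real code is more involved. */
static unsigned long ttm_pool_shrink_count(struct shrinker *shrink,
                                           struct shrink_control *sc)
{
        /* Report how many cached pages could be freed. Only pages of
         * buffers already unbound by the worker thread end up here,
         * which is why nothing below the hard limits is shrinkable. */
        return ttm_pool_cached_pages();
}

static unsigned long ttm_pool_shrink_scan(struct shrinker *shrink,
                                          struct shrink_control *sc)
{
        /* Free up to sc->nr_to_scan cached pages and report how many
         * were actually freed. */
        return ttm_pool_free_pages(sc->nr_to_scan);
}

static struct shrinker ttm_pool_shrinker = {
        .count_objects = ttm_pool_shrink_count,
        .scan_objects  = ttm_pool_shrink_scan,
        .seeks         = DEFAULT_SEEKS,
};

/* register_shrinker(&ttm_pool_shrinker) at init,
 * unregister_shrinker(&ttm_pool_shrinker) on teardown. */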

No idea why exactly things are the way they are, since the first thread
already does a dma_resv_trylock, and that's enough to avoid locking
inversions when being called from 2. Or at least it should be, for a
reasonable driver design.

Yes, that's currently a bit messy in TTM and not such a good design overall.

The only other thing I'm seeing is the global lru, but that could be
fixed by having a per-device core mm shrinker instance which directly
shrinks the per-device lru. Then we just balance globally, like with
all shrinkers, through the core mm "shrink everyone equally" approach.
You can even keep the separate page alloc shrinker, since core mm
always loops over all shrinkers - we're not the only ones where
shrinking one cache makes more memory available for another cache to
shrink; e.g. you can't throw out an inode without first throwing out
all the dentries pointing at it.
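
A minimal sketch of what such a per-device shrinker could look like
(all the my_* names are hypothetical; this is not existing TTM code).
The scan callback walks the device's own lru under its spinlock and
skips contended buffers via dma_resv_trylock:

#include <linux/dma-resv.h>
#include <linux/list.h>
#include <linux/shrinker.h>
#include <linux/spinlock.h>

struct my_bo {
        struct dma_resv *resv;
        struct list_head lru_entry;
};

struct my_device {
        struct shrinker shrinker;
        spinlock_t lru_lock;
        struct list_head lru;
};

static unsigned long my_dev_shrink_scan(struct shrinker *shrink,
                                        struct shrink_control *sc)
{
        struct my_device *dev =
                container_of(shrink, struct my_device, shrinker);
        unsigned long freed = 0, scanned = 0;

        spin_lock(&dev->lru_lock);
        while (scanned++ < sc->nr_to_scan && !list_empty(&dev->lru)) {
                struct my_bo *bo = list_first_entry(&dev->lru,
                                                    struct my_bo,
                                                    lru_entry);

                /* trylock avoids inversions against paths that
                 * allocate memory while holding the reservation. */
                if (!dma_resv_trylock(bo->resv)) {
                        list_move_tail(&bo->lru_entry, &dev->lru);
                        continue;
                }
                list_del_init(&bo->lru_entry);
                /* Drop the spinlock before doing real work; only
                 * GFP_ATOMIC would be legal while holding it. A real
                 * driver also needs a reference on bo here. */
                spin_unlock(&dev->lru_lock);
                freed += my_bo_swapout(bo); /* hypothetical unbind+swap */
                dma_resv_unlock(bo->resv);
                spin_lock(&dev->lru_lock);
        }
        spin_unlock(&dev->lru_lock);
        return freed;
}

Each device registers its own instance, so the core mm balances the
devices against each other and against every other shrinker in the
system, instead of everything funneling through one global lru.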

My plan is to replace all this with an explicit SWAP domain for buffer objects.

One idea was to make the SYSTEM and SWAP domains global and express all of this as transitions between the different domains. But having one shrinker per device sounds like an even better idea now.
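
To illustrate the domain idea (purely hypothetical: TTM has SYSTEM, TT
and VRAM placements, but no SWAP domain today), shrinking would become
an ordinary placement move instead of special-cased swapout code:

/* TTM_PL_FLAG_SWAP is made up for this sketch; it does not exist. */
static const struct ttm_place swap_place = {
        .fpfn = 0,
        .lpfn = 0,
        .flags = TTM_PL_FLAG_SWAP, /* back the BO with swappable shmem */
};

static const struct ttm_placement swap_placement = {
        .num_placement = 1,
        .placement = &swap_place,
};

/* Eviction to swap is then just another validate, like
 * VRAM -> TT -> SYSTEM today:
 *
 *      ttm_bo_validate(bo, &swap_placement, &ctx);
 */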

Another problem would be allocating memory while holding per-device
lru locks (since trylock on such a global lock in shrinkers is a
really bad idea, we know that from all the dev->struct_mutex lolz in
i915). But for ttm that's not a problem, since all the lrus are
spinlocks, so only GFP_ATOMIC is allowed under them anyway.

Yes, exactly.

Christian.


Adding Thomas for this ttm tangent.
-Daniel

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel



