Re: [RFC PATCH 2/6] drm/cgroup: Add memory accounting DRM cgroup

Hi,

Thanks for working on this!

On Thu, Jun 27, 2024 at 05:47:21PM GMT, Maarten Lankhorst wrote:
> The initial version was based roughly on the rdma and misc cgroup
> controllers, with a lot of the accounting code borrowed from rdma.
> 
> The current version is a complete rewrite with page counter; it uses
> the same min/low/max semantics as the memory cgroup as a result.
> 
> There's a small mismatch, as TTM uses u64 while page_counter counts
> long pages. In practice it's not a problem: 32-bit systems don't
> really come with >=4GB cards, and as long as we're consistently wrong
> with units, it's fine. The device page size may not be in the same
> units as the kernel page size, and each region might also have a
> different page size (VRAM vs GART for example).
> 
> The interface is simple:
> - populate drmcgroup_device->regions[..] name and size for each active
>   region, set num_regions accordingly.
> - Call drm(m)cg_register_device()
> - Use drmcg_try_charge to check whether you can allocate a chunk of
>   memory, and drmcg_uncharge when freeing it. drmcg_try_charge may
>   return an error code, or -EAGAIN when the cgroup limit is reached;
>   in that case a reference to the limiting pool is returned.
> - The limiting cs can be used as the compare argument for
>   drmcs_evict_valuable.
> - After evicting enough, drop the reference to the limiting cs with
>   drmcs_pool_put.
> 
> This API allows you to limit device resources with cgroups.
> You can see the supported cards in /sys/fs/cgroup/drm.capacity.
> You need to echo +drm to cgroup.subtree_control, and then you can
> partition memory.
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@xxxxxxxxxxxxxxx>
> Co-developed-by: Friedrich Vock <friedrich.vock@xxxxxx>
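
Just to check my understanding of the flow above, I'd expect a driver's
allocation path to end up looking roughly like this. The function names
are taken from your description; the exact signatures, struct names and
the drv_evict_from() helper are guesses on my part, so treat this as a
sketch only:

/* Sketch only: drmcg_try_charge(), drmcs_pool_put() and drmcg_uncharge()
 * come from the cover text above, but their signatures, the struct names
 * and drv_evict_from() are my guesses, not the real API.
 */
static int drv_alloc_region_mem(struct drmcgroup_device *cgdev,
                                unsigned int region, u64 size)
{
        struct drmcgroup_pool_state *limit_cs;
        int ret;

        while ((ret = drmcg_try_charge(cgdev, region, size,
                                       &limit_cs)) == -EAGAIN) {
                /*
                 * Over a limit: a reference to the limiting cs is
                 * returned, use it (e.g. through drmcs_evict_valuable())
                 * to pick victim buffers, drop the reference, retry.
                 */
                bool progress = drv_evict_from(limit_cs, size);

                drmcs_pool_put(limit_cs);
                if (!progress)
                        return -ENOMEM;
        }
        if (ret)
                return ret;

        /* ... do the actual allocation; call drmcg_uncharge() on free. */
        return 0;
}

If I got that wrong, please correct me.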

I'm sorry, I should have written minutes of the discussion we had with
TJ and Tvrtko the other day.

We're all very interested in making this happen, but doing a "DRM"
cgroup doesn't look like the right path to us.

Indeed, we have a significant number of drivers that won't have
dedicated memory but will depend on DMA allocations one way or another,
and those pools are shared between multiple frameworks (DRM, V4L2,
DMA-Buf Heaps, at least).

This was also pointed out by Sima some time ago here:
https://lore.kernel.org/amd-gfx/YCVOl8%2F87bqRSQei@phenom.ffwll.local/

So we'll want that cgroup subsystem to be cross-framework. We settled on
a "device" cgroup during the discussion, but I'm sure we'll have plenty
of bikeshedding.

The other thing we agreed on, based on the feedback TJ got on the last
iterations of his series, was to go with memcg for drivers not using
DMA allocations.

That's the part where I expect some discussion too :)
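
For buffers that are plain system memory, the accounting can stay
entirely in memcg, for example by allocating with __GFP_ACCOUNT
(shmem-backed GEM objects are already charged to memcg that way). A
minimal sketch of what I mean, with a hypothetical helper name:

#include <linux/gfp.h>

/* drv_alloc_backing_page() is hypothetical; the point is only that
 * __GFP_ACCOUNT charges the page to the allocating task's memory
 * cgroup, so no new controller is needed for this case.
 */
static struct page *drv_alloc_backing_page(void)
{
        return alloc_page(GFP_KERNEL | __GFP_ACCOUNT);
}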

So we went back to a previous version of TJ's work, and I've started to
work on:

  - Integration of the cgroup in the GEM DMA and GEM VRAM helpers (this
    works on tidss right now)

  - Integration of all heaps except the system one into that cgroup
    (working on this at the moment)

  - Integration into v4l2 (next on my list)

Maxime
