RFC as I'm looking for comments.

For long-running compute, it can be beneficial to partition the GPU
memory between cgroups, so each cgroup can use its maximum amount of
memory without interfering with other scheduled jobs. Done properly,
this can alleviate the need for eviction, which might result in a job
being terminated if the GPU doesn't support mid-thread preemption or
recoverable page faults.

This is done by adding a bunch of knobs to cgroup:
  drm.capacity: Shows maximum capacity of each resource region.
  drm.max: Display or limit max amount of memory.
  drm.current: Current amount of memory in use.

TTM has not been made cgroup aware yet, so instead of evicting from
the current cgroup to stay within the cgroup limits, it simply returns
the error -ENOSPC to userspace.

I've used Tvrtko's cgroup controller series as a base, but it
implemented scheduling weight, not memory accounting, so I only ended
up keeping the base patch.

Xe is not upstream yet, so the driver-specific patch will only apply on
https://gitlab.freedesktop.org/drm/xe/kernel

Maarten Lankhorst (3):
  drm/cgroup: Add memory accounting to DRM cgroup
  drm/ttm: Handle -EAGAIN in ttm_resource_alloc as -ENOSPC.
  drm/xe: Add support for the drm cgroup

Tvrtko Ursulin (1):
  cgroup: Add the DRM cgroup controller

 Documentation/admin-guide/cgroup-v2.rst    |  46 ++
 Documentation/gpu/drm-compute.rst          |  54 ++
 drivers/gpu/drm/ttm/ttm_bo.c               |   4 +-
 drivers/gpu/drm/xe/xe_device.c             |   4 +
 drivers/gpu/drm/xe/xe_device_types.h       |   4 +
 drivers/gpu/drm/xe/xe_ttm_vram_mgr.c       |  21 +-
 drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h |   5 +
 include/linux/cgroup_drm.h                 |  90 ++++
 include/linux/cgroup_subsys.h              |   4 +
 init/Kconfig                               |   7 +
 kernel/cgroup/Makefile                     |   1 +
 kernel/cgroup/drm.c                        | 557 +++++++++++++++++++++
 12 files changed, 794 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/gpu/drm-compute.rst
 create mode 100644 include/linux/cgroup_drm.h
 create mode 100644 kernel/cgroup/drm.c

-- 
2.34.1
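As a rough illustration of how the interface files might be exercised
from the shell, here is a sketch against the proposed knobs. The cgroup
name, the region name "vram0", and the value syntax in drm.max are
assumptions for illustration, not something specified by this cover
letter; the admin-guide patch in the series documents the real format.

```shell
# Sketch only: assumes a kernel with this series applied and the drm
# cgroup controller enabled. The region name "vram0" and the
# "<region> <bytes>" syntax are assumptions for illustration.

# Create a cgroup for a compute job and enable the drm controller
# for child cgroups.
mkdir /sys/fs/cgroup/compute-job
echo "+drm" > /sys/fs/cgroup/cgroup.subtree_control

# Show the maximum capacity of each resource region.
cat /sys/fs/cgroup/compute-job/drm.capacity

# Cap the job's VRAM usage at 1 GiB. Since TTM is not cgroup aware
# yet, allocations beyond this limit fail with -ENOSPC rather than
# evicting other cgroups' buffers.
echo "vram0 1073741824" > /sys/fs/cgroup/compute-job/drm.max

# Show the current amount of memory in use.
cat /sys/fs/cgroup/compute-job/drm.current
```
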