Buffers created by a GPU driver can be huge (often several MB, sometimes hundreds or thousands of MB). Some GPU drivers call drm_gem_get_pages(), which uses shmem to allocate the backing pages, so those buffers are already charged to memcg. Other drivers, such as amdgpu, use TTM, which allocates its system-memory-backed buffers with alloc_pages() and therefore does not charge memcg at all currently.

Unlike pure kernel memory, a GPU buffer needs to be mapped into user space so the user can fill it with data and commands before the GPU hardware consumes it. So it is not appropriate to account it as memcg kmem by adding __GFP_ACCOUNT to the alloc_pages() gfp flags. Another reason is that the backing memory of a GPU buffer may be allocated later, after the buffer object is created, and possibly in a different process. So we need to record the memcg at buffer object creation time and charge it later when needed. TTM also uses a page pool as a cache for write-combined/uncached pages, so adding new GFP flags for alloc_pages() does not work either.

Qiang Yu (3):
  mm: memcontrol: add mem_cgroup_(un)charge_drvmem
  mm: memcontrol: record driver memory statistics
  drm/ttm: support memcg for ttm_tt

 drivers/gpu/drm/ttm/ttm_bo.c         | 10 +++++
 drivers/gpu/drm/ttm/ttm_page_alloc.c | 18 ++++++++-
 drivers/gpu/drm/ttm/ttm_tt.c         |  3 ++
 include/drm/ttm/ttm_bo_api.h         |  5 +++
 include/drm/ttm/ttm_tt.h             |  4 ++
 include/linux/memcontrol.h           | 22 +++++++++++
 mm/memcontrol.c                      | 58 ++++++++++++++++++++++++++++
 7 files changed, 119 insertions(+), 1 deletion(-)

--
2.17.1

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel