Currently the THP deferred split shrinker is not memcg aware; this may
cause premature OOM with some configurations. For example, the test
below runs into premature OOM easily:

$ cgcreate -g memory:thp
$ echo 4G > /sys/fs/cgroup/memory/thp/memory/limit_in_bytes
$ cgexec -g memory:thp transhuge-stress 4000

transhuge-stress comes from the kernel selftests.

It is easy to hit OOM, but there are still a lot of THPs on the deferred
split queue; memcg direct reclaim can't touch them since the deferred
split shrinker is not memcg aware.

Convert the deferred split shrinker to be memcg aware by introducing a
per-memcg deferred split queue. A THP should be on either the per-node
or the per-memcg deferred split queue, depending on whether it belongs
to a memcg. When a page is migrated to another memcg, it is moved to the
target memcg's deferred split queue as well.

Reuse the second tail page's deferred_list for the per-memcg list, since
the same THP can't be on multiple deferred split queues at the same
time.

Make the deferred split shrinker not depend on memcg kmem since it is
not slab. It doesn't make sense to skip shrinking THPs just because
memcg kmem is disabled.

With the above changes, the test demonstrated above doesn't trigger OOM
anymore, even with cgroup.memory=nokmem.

Changelog:
v5: * Fixed the issue reported by Qian Cai, folded the fix in.
    * Squashed build fix patches in.
v4: * Replaced list_del() with list_del_init() per Andrew.
    * Fixed the build failure for different kconfig combos and tested
      the below combos:
          MEMCG + TRANSPARENT_HUGEPAGE
          !MEMCG + TRANSPARENT_HUGEPAGE
          MEMCG + !TRANSPARENT_HUGEPAGE
          !MEMCG + !TRANSPARENT_HUGEPAGE
    * Added Acked-by from Kirill Shutemov.
v3: * Adopted the suggestion from Kirill Shutemov to move
      mem_cgroup_uncharge() out of __page_cache_release() in order to
      handle THP free properly.
    * Adjusted the sequence of the patches per Kirill Shutemov. Dropped
      patch 3/4 from v2.
    * Moved enqueuing the THP onto the "to" memcg deferred split queue
      after page->mem_cgroup is changed in memcg account move, per
      Kirill Tkhai.
v2: * Adopted the suggestion from Kirill Shutemov to extract the
      deferred split fields into a struct to reduce code duplication
      (patch 1/4). With this change, the lines changed shrank to 198
      from 278.
    * Removed memcg_deferred_list; use deferred_list for both global
      and memcg. With the code deduplication, it doesn't make much
      sense to keep it. Kirill Tkhai also suggested so.
    * Fixed typo for SHRINKER_NONSLAB.

Yang Shi (4):
      mm: thp: extract split_queue_* into a struct
      mm: move mem_cgroup_uncharge out of __page_cache_release()
      mm: shrinker: make shrinker not depend on memcg kmem
      mm: thp: make deferred split shrinker memcg aware

 include/linux/huge_mm.h    |   9 ++++++
 include/linux/memcontrol.h |  23 +++++++++-----
 include/linux/mm_types.h   |   1 +
 include/linux/mmzone.h     |  12 ++++++--
 include/linux/shrinker.h   |   3 +-
 mm/huge_memory.c           | 111 ++++++++++++++++++++++++++++++++++++++++++++++++++----------------
 mm/memcontrol.c            |  33 +++++++++++++++-----
 mm/page_alloc.c            |   9 ++++--
 mm/swap.c                  |   2 +-
 mm/vmscan.c                |  66 +++++++++++++++++++--------------------
 10 files changed, 186 insertions(+), 83 deletions(-)