If enabled in config, alloc_rt_sched_group() is called for each new
cpu cgroup and allocates a huge (~1700 bytes) percpu struct rt_rq.
This significantly exceeds the size of the percpu allocation in the
common part of cgroup creation.

Memory allocated during new cpu cgroup creation
(with enabled RT_GROUP_SCHED):
common part: ~11Kb + 318 bytes percpu
cpu cgroup: ~2.5Kb + ~2800 bytes percpu

Accounting for this memory helps to avoid misuse inside memcg-limited
containers.

Signed-off-by: Vasily Averin <vvs@xxxxxxxxxx>
Acked-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Acked-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Acked-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
---
 kernel/sched/rt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 8c9ed9664840..44a8fc096e33 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -256,7 +256,7 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 	for_each_possible_cpu(i) {
 		rt_rq = kzalloc_node(sizeof(struct rt_rq),
-				     GFP_KERNEL, cpu_to_node(i));
+				     GFP_KERNEL_ACCOUNT, cpu_to_node(i));
 		if (!rt_rq)
 			goto err;
--
2.36.1