On Tue 21-08-18 14:35:57, Roman Gushchin wrote:
> If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> stack pages are charged against corresponding memory cgroups
> on allocation and uncharged on releasing them.
>
> The problem is that we do cache kernel stacks in small
> per-cpu caches and do reuse them for new tasks, which can
> belong to different memory cgroups.
>
> Each stack page still holds a reference to the original cgroup,
> so the cgroup can't be released until the vmap area is released.
>
> To make this happen we need more than two subsequent exits
> without forks in between on the current cpu, which makes it
> very unlikely to happen. As a result, I saw a significant number
> of dying cgroups (in theory, up to 2 * number_of_cpu +
> number_of_tasks), which can't be released even by significant
> memory pressure.
>
> As a cgroup structure can take a significant amount of memory
> (first of all, per-cpu data like memcg statistics), it leads
> to a noticeable waste of memory.
>
> Signed-off-by: Roman Gushchin <guro@xxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> Cc: Andy Lutomirski <luto@xxxxxxxxxx>
> Cc: Konstantin Khlebnikov <koct9i@xxxxxxxxx>
> Cc: Tejun Heo <tj@xxxxxxxxxx>
> Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>

Looks good to me. Two nits below.

I am not sure a stable tree backport is really needed, but it would be
nice to add

Fixes: ac496bf48d97 ("fork: Optimize task creation by caching two thread stacks per CPU if CONFIG_VMAP_STACK=y")

Acked-by: Michal Hocko <mhocko@xxxxxxxx>

> @@ -248,9 +253,20 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
>  static inline void free_thread_stack(struct task_struct *tsk)
>  {
>  #ifdef CONFIG_VMAP_STACK
> -	if (task_stack_vm_area(tsk)) {
> +	struct vm_struct *vm = task_stack_vm_area(tsk);
> +
> +	if (vm) {
>  		int i;
>
> +		for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
> +			mod_memcg_page_state(vm->pages[i],
> +					     MEMCG_KERNEL_STACK_KB,
> +					     -(int)(PAGE_SIZE / 1024));
> +
> +			memcg_kmem_uncharge(vm->pages[i],
> +					    compound_order(vm->pages[i]));

When do we have order > 0 here?

Also, I was wondering how this avoids blowing up on partially charged
stacks, but both mod_memcg_page_state() and memcg_kmem_uncharge() check
page->mem_cgroup, so this is safe. Maybe a comment would save people
from scratching their heads.

-- 
Michal Hocko
SUSE Labs
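
For illustration, here is a sketch of free_thread_stack() carrying the
kind of comment suggested in the second nit above. Only the lines inside
the quoted hunk are from the patch; the surrounding function body and the
tail of the loop are reconstructed from the diff context and may not match
the actual submission:

static inline void free_thread_stack(struct task_struct *tsk)
{
#ifdef CONFIG_VMAP_STACK
	struct vm_struct *vm = task_stack_vm_area(tsk);

	if (vm) {
		int i;

		for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
			/*
			 * The stack may be partially charged or not
			 * charged at all. Both mod_memcg_page_state()
			 * and memcg_kmem_uncharge() check
			 * page->mem_cgroup and are no-ops for uncharged
			 * pages, so calling them unconditionally on
			 * every page is safe.
			 */
			mod_memcg_page_state(vm->pages[i],
					     MEMCG_KERNEL_STACK_KB,
					     -(int)(PAGE_SIZE / 1024));

			memcg_kmem_uncharge(vm->pages[i],
					    compound_order(vm->pages[i]));
		}

		/*
		 * Rest of the function (caching the stack in the per-cpu
		 * cache, or releasing the vmap area) is unchanged and
		 * elided here.
		 */
	}
#endif
}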