On Wed, Aug 15, 2018 at 10:37:28AM -0700, Andy Lutomirski wrote:
> 
> 
> > On Aug 15, 2018, at 10:32 AM, Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
> > 
> >> On Wed, Aug 15, 2018 at 10:26 AM Roman Gushchin <guro@xxxxxx> wrote:
> >> 
> >>> On Wed, Aug 15, 2018 at 10:12:42AM -0700, Andy Lutomirski wrote:
> >>> 
> >>> 
> >>>>> On Aug 15, 2018, at 9:55 AM, Roman Gushchin <guro@xxxxxx> wrote:
> >>>>> 
> >>>>>> On Wed, Aug 15, 2018 at 12:39:23PM -0400, Johannes Weiner wrote:
> >>>>>> On Tue, Aug 14, 2018 at 05:36:19PM -0700, Roman Gushchin wrote:
> >>>>>> @@ -224,9 +224,14 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> >>>>>>                  return s->addr;
> >>>>>>          }
> >>>>>> 
> >>>>>> +        /*
> >>>>>> +         * Allocated stacks are cached and later reused by new threads,
> >>>>>> +         * so memcg accounting is performed manually on assigning/releasing
> >>>>>> +         * stacks to tasks. Drop __GFP_ACCOUNT.
> >>>>>> +         */
> >>>>>>          stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN,
> >>>>>>                                       VMALLOC_START, VMALLOC_END,
> >>>>>> -                                     THREADINFO_GFP,
> >>>>>> +                                     THREADINFO_GFP & ~__GFP_ACCOUNT,
> >>>>>>                                       PAGE_KERNEL,
> >>>>>>                                       0, node, __builtin_return_address(0));
> >>>>>> 
> >>>>>> @@ -246,12 +251,41 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> >>>>>>  #endif
> >>>>>>  }
> >>>>>> 
> >>>>>> +static void memcg_charge_kernel_stack(struct task_struct *tsk)
> >>>>>> +{
> >>>>>> +#ifdef CONFIG_VMAP_STACK
> >>>>>> +        struct vm_struct *vm = task_stack_vm_area(tsk);
> >>>>>> +
> >>>>>> +        if (vm) {
> >>>>>> +                int i;
> >>>>>> +
> >>>>>> +                for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++)
> >>>>>> +                        memcg_kmem_charge(vm->pages[i], __GFP_NOFAIL,
> >>>>>> +                                          compound_order(vm->pages[i]));
> >>>>>> +
> >>>>>> +                /* All stack pages belong to the same memcg. */
> >>>>>> +                mod_memcg_page_state(vm->pages[0], MEMCG_KERNEL_STACK_KB,
> >>>>>> +                                     THREAD_SIZE / 1024);
> >>>>>> +        }
> >>>>>> +#endif
> >>>>>> +}
> >>>>> 
> >>>>> Before this change, the memory limit can fail the fork, but afterwards
> >>>>> fork() can grow memory consumption unimpeded by the cgroup settings.
> >>>>> 
> >>>>> Can we continue to use try_charge() here and fail the fork?
> >>>> 
> >>>> We can, but I'm not convinced we should.
> >>>> 
> >>>> Kernel stack is relatively small, and it's already allocated at this point.
> >>>> So IMO exceeding the memcg limit for 1-2 pages isn't worse than
> >>>> adding complexity to handle this case (e.g. uncharging a partially
> >>>> charged stack). Do you have an example where it does matter?
> >>> 
> >>> What bounds it to just a few pages? Couldn’t there be lots of forks in flight that all hit this path? It’s unlikely, and there are surely easier DoS vectors, but still.
> >> 
> >> Because any following memcg-aware allocation will fail.
> >> There is also the pid cgroup controller, which can be used to limit the number
> >> of forks.
> >> 
> >> Anyway, I'm ok to handle this case and fail the fork,
> >> if you think it does matter.
> > 
> > Roman, before adding more changes, do benchmark this. Maybe disabling
> > the stack caching for CONFIG_MEMCG is much cleaner.
> > 
> 
> Unless memcg accounting is colossally slow, the caching should be left on. vmalloc() isn’t inherently slow, but vfree() is, since we need to do a global broadcast TLB flush after enough vfree() calls.

It's not.

BTW, is the test you used to measure the performance gains
of stack caching available publicly?

Thanks!
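
For concreteness, the fail-the-fork variant discussed above might look
something like the sketch below. This is not the posted patch: it assumes
that memcg_kmem_charge() returns nonzero when the charge fails once
__GFP_NOFAIL is dropped, and that memcg_kmem_uncharge() can unwind a
partially charged stack; the caller in copy_process() would then have to
check the return value and abort the fork with -ENOMEM.

static int memcg_charge_kernel_stack(struct task_struct *tsk)
{
#ifdef CONFIG_VMAP_STACK
        struct vm_struct *vm = task_stack_vm_area(tsk);

        if (vm) {
                int i;

                for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
                        /*
                         * Without __GFP_NOFAIL the charge may fail
                         * against the memcg limit.
                         */
                        if (memcg_kmem_charge(vm->pages[i], GFP_KERNEL,
                                              compound_order(vm->pages[i]))) {
                                /*
                                 * Unwind the partially charged stack;
                                 * this is the extra complexity
                                 * mentioned above.
                                 */
                                while (i--)
                                        memcg_kmem_uncharge(vm->pages[i],
                                                compound_order(vm->pages[i]));
                                return -ENOMEM;
                        }
                }

                /* All stack pages belong to the same memcg. */
                mod_memcg_page_state(vm->pages[0], MEMCG_KERNEL_STACK_KB,
                                     THREAD_SIZE / 1024);
        }
#endif
        return 0;
}

The uncharge on the stack release path would have to mirror this exactly,
so a cached stack is never handed to a new task while still charged to the
previous owner's memcg.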