On Thu, Jan 3, 2019 at 2:15 AM William Kucharski <william.kucharski@xxxxxxxxxx> wrote:
>
> > On Jan 2, 2019, at 8:14 PM, Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
> >
> >       countersize = COUNTER_OFFSET(tmp.nentries) * nr_cpu_ids;
> > -     newinfo = vmalloc(sizeof(*newinfo) + countersize);
> > +     newinfo = __vmalloc(sizeof(*newinfo) + countersize, GFP_KERNEL_ACCOUNT,
> > +                         PAGE_KERNEL);
> >       if (!newinfo)
> >               return -ENOMEM;
> >
> >       if (countersize)
> >               memset(newinfo->counters, 0, countersize);
> >
> > -     newinfo->entries = vmalloc(tmp.entries_size);
> > +     newinfo->entries = __vmalloc(tmp.entries_size, GFP_KERNEL_ACCOUNT,
> > +                                  PAGE_KERNEL);
> >       if (!newinfo->entries) {
> >               ret = -ENOMEM;
> >               goto free_newinfo;
> > --
>
> Just out of curiosity, what are the actual sizes of these areas in typical use
> given __vmalloc() will be allocating by the page?
>

We don't really use this in production, so I don't have a good idea of
the sizes in the typical case. The size depends on the workload. The
motivation behind this patch was a system OOM triggered by a syzbot
instance running in a restricted memcg.

Shakeel