On Wed, Oct 12, 2022 at 09:04:59PM -0700, Wei Wang wrote:
> On Wed, Oct 12, 2022 at 8:49 PM Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
> >
> > On Wed, 12 Oct 2022 20:34:00 -0700 Wei Wang wrote:
> > > > I pushed this little nugget to one affected machine via KLP:
> > > >
> > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > index 03ffbb255e60..c1ca369a1b77 100644
> > > > --- a/mm/memcontrol.c
> > > > +++ b/mm/memcontrol.c
> > > > @@ -7121,6 +7121,10 @@ bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages,
> > > >                 return true;
> > > >         }
> > > >
> > > > +       if (gfp_mask == GFP_NOWAIT) {
> > > > +               try_charge(memcg, gfp_mask|__GFP_NOFAIL, nr_pages);
> > > > +               refill_stock(memcg, nr_pages);
> > > > +       }
> > > >         return false;
> > > > }
> > > >
> > > AFAICT, if you force charge by passing __GFP_NOFAIL to try_charge(),
> > > you should return true to tell the caller that the nr_pages is
> > > actually being charged.
> >
> > Ack - not sure what the best thing to do is, tho. Always pass NOFAIL
> > in softirq?
> >
> > It's not clear to me yet why doing the charge/uncharge actually helps,
> > perhaps try_to_free_mem_cgroup_pages() does more when NOFAIL is passed?
> >
> I am curious to know as well.
>
> > I'll do more digging tomorrow.
> >
> > > Although I am not very sure what refill_stock() does. Does that
> > > "uncharge" those pages?
> >
> > I think so, I copied it from mem_cgroup_uncharge_skmem().

I think I understand why this issue started happening after this patch.
The memcg charging happens in batches of 32 (64 nowadays) pages even if
the charge request is for less. The remaining pre-charge is cached in
the per-cpu cache (or stock).

With (GFP_NOWAIT | __GFP_NOFAIL), you let the memcg go over the limit
without triggering oom-kill, and refill_stock() then just puts the
pre-charge into the per-cpu cache. So the later allocations/charges
succeed from the per-cpu cache even though the memcg is over the limit.
With this patch we no longer force charge and then uncharge on failure,
so those later allocations/charges fail as well.

Regarding the right thing to do, IMHO, it is to use GFP_ATOMIC instead
of GFP_NOWAIT. If you look at the following comment in
try_charge_memcg(), we added this exception particularly for this kind
of situation.

...
        /*
         * Memcg doesn't have a dedicated reserve for atomic
         * allocations. But like the global atomic pool, we need to
         * put the burden of reclaim on regular allocation requests
         * and let these go through as privileged allocations.
         */
        if (!(gfp_mask & (__GFP_NOFAIL | __GFP_HIGH)))
                return -ENOMEM;
...

Shakeel
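
To make the batching behaviour above concrete, here is a toy model in C
of the mechanism being described: a charge is satisfied from the per-cpu
stock when possible, otherwise a whole batch is charged against the
counter and the surplus is parked in the stock. The struct, names and
constants are simplifications for illustration only, not the real
memcontrol.c code.

        #include <stdbool.h>

        /*
         * Toy model of the batching described above -- not the real
         * memcontrol.c code.  Single CPU, no locking, and it assumes
         * nr_pages <= TOY_CHARGE_BATCH, which holds for the skmem
         * charges discussed here.
         */
        #define TOY_CHARGE_BATCH 64     /* MEMCG_CHARGE_BATCH; 32 on older kernels */

        struct toy_memcg {
                unsigned long usage;    /* pages charged to the counter */
                unsigned long limit;    /* memory.max in pages */
                unsigned long stocked;  /* pre-charged pages in the per-cpu stock */
        };

        static bool toy_charge(struct toy_memcg *memcg, unsigned int nr_pages,
                               bool force)
        {
                /* consume_stock(): a previous batch may already cover this request. */
                if (memcg->stocked >= nr_pages) {
                        memcg->stocked -= nr_pages;
                        return true;
                }

                /* Otherwise charge a whole batch against the counter. */
                if (memcg->usage + TOY_CHARGE_BATCH > memcg->limit && !force)
                        return false;   /* the GFP_NOWAIT failure discussed above */

                memcg->usage += TOY_CHARGE_BATCH;       /* overshoots the limit when forced */
                /* refill_stock(): park the surplus for later requests. */
                memcg->stocked += TOY_CHARGE_BATCH - nr_pages;
                return true;
        }

In this model, the KLP nugget corresponds to forcing the charge and then
handing nr_pages back to the stock, which is why the next GFP_NOWAIT
request succeeds straight from the stock even though usage already
exceeds the limit.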
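
And a minimal sketch of the GFP_ATOMIC suggestion on the caller side,
assuming the charge mask is picked by a small helper; the helper name
and its placement are assumptions for illustration, not the actual
upstream change:

        /*
         * Sketch only -- illustrative helper, not the actual fix.
         * GFP_ATOMIC sets __GFP_HIGH, which the try_charge_memcg()
         * check quoted above accepts as a privileged request, whereas
         * plain GFP_NOWAIT fails with -ENOMEM at the limit.
         */
        static inline gfp_t skmem_charge_gfp(void)
        {
                /* Sleepable context can reclaim; softirq uses the atomic exception. */
                return in_task() ? GFP_KERNEL : GFP_ATOMIC;
        }

The charge site would then pass skmem_charge_gfp() as the gfp_mask
argument to mem_cgroup_charge_skmem() instead of plain GFP_NOWAIT, so
softirq charges go through as privileged rather than failing at the
limit.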