On Wed, Feb 17, 2021 at 3:09 AM Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
>
> When pages are swapped in, the VM may retain the swap copy to avoid
> repeated writes in the future. It's also retained if shared pages are
> faulted back in some processes, but not in others. During that time we
> have an in-memory copy of the page, as well as an on-swap copy. Cgroup1
> and cgroup2 handle these overlapping lifetimes slightly differently
> due to the nature of how they account memory and swap:
>
> Cgroup1 has a unified memory+swap counter that tracks a data page
> regardless of whether it's in-core or swapped out. On swapin, we
> transfer the charge from the swap entry to the newly allocated
> swapcache page, even though the swap entry might stick around for a
> while. That's why we have a mem_cgroup_uncharge_swap() call inside
> mem_cgroup_charge().
>
> Cgroup2 tracks memory and swap as separate, independent resources and
> thus has split memory and swap counters. On swapin, we charge the
> newly allocated swapcache page as memory, while the swap slot in turn
> must remain charged to the swap counter as long as it's allocated, too.
>
> The cgroup2 logic was broken by commit 2d1c498072de ("mm: memcontrol:
> make swap tracking an integral part of memory control"), because it
> accidentally removed the do_memsw_account() check in the branch inside
> mem_cgroup_charge() that was supposed to tell the difference between
> the charge transfer in cgroup1 and the separate counters in cgroup2.
>
> As a result, cgroup2 currently undercounts retained swap to varying
> degrees: swap slots are cached up to 50% of the configured limit or
> total available swap space; partially faulted back shared pages are
> only limited by physical capacity. This in turn allows cgroups to
> significantly overconsume their allotted swap space.
>
> Add the do_memsw_account() check back to fix this problem.
>
> Fixes: 2d1c498072de ("mm: memcontrol: make swap tracking an integral part of memory control")
> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx # 5.8+
> ---
> v2:
>  - update commit log and add a comment to the code. Many thanks to Johannes.
>
>  mm/memcontrol.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ed5cc78a8dbf..2efbb4f71d5f 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6771,7 +6771,19 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
>         memcg_check_events(memcg, page);
>         local_irq_enable();
>
> -       if (PageSwapCache(page)) {
> +       /*
> +        * Cgroup1's unified memory+swap counter has been charged with the
> +        * new swapcache page, finish the transfer by uncharging the swap
> +        * slot. The swap slot would also get uncharged when it dies, but
> +        * it can stick around indefinitely and we'd count the page twice
> +        * the entire time.
> +        *
> +        * Cgroup2 has separate resource counters for memory and swap,
> +        * so this is a non-issue here. Memory and swap charge lifetimes
> +        * correspond 1:1 to page and swap slot lifetimes: we charge the
> +        * page to memory here, and uncharge swap when the slot is freed.
> +        */
> +       if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && PageSwapCache(page)) {

Please use do_memsw_account() here instead of the open-coded
!cgroup_subsys_on_dfl(memory_cgrp_subsys).
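Just to illustrate the shape of that suggestion, here is a rough sketch
of the swapin branch in mem_cgroup_charge() with the helper swapped in
(not the exact upstream hunk; if I remember correctly, do_memsw_account()
currently boils down to !cgroup_subsys_on_dfl(memory_cgrp_subsys), i.e.
"legacy hierarchy with the unified memory+swap counter"):

        if (do_memsw_account() && PageSwapCache(page)) {
                swp_entry_t entry = { .val = page_private(page) };

                /*
                 * Legacy hierarchy: the page was just charged to the
                 * unified memory+swap counter, so drop the duplicate
                 * charge still held by the allocated swap slot.
                 */
                mem_cgroup_uncharge_swap(entry, nr_pages);
        }

The helper also keeps the cgroup1-vs-cgroup2 distinction in one place
should its definition ever change.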
>                 swp_entry_t entry = { .val = page_private(page) };
>                 /*
>                  * The swap entry might not get freed for a long time,
> --
> 2.11.0
>