On Thu, 24 Oct 2024, Yang Shi wrote:

> On Wed, Oct 23, 2024 at 9:13 PM Hugh Dickins <hughd@xxxxxxxxxx> wrote:
> >
> > That goes back to 5.4 commit 87eaceb3faa5 ("mm: thp: make deferred split
> > shrinker memcg aware"): which included a check on swapcache before adding
> > to deferred queue (which can now be removed), but no check on deferred
> > queue before adding THP to swapcache (maybe other circumstances prevented
> > it at that time, but not now).
>
> If I remember correctly, THP just can be added to deferred list when
> there is no PMD map before mTHP swapout, so shrink_page_list() did
> check THP's compound_mapcount (called _entire_mapcount now) before
> adding it to swap cache.
>
> Now the code just checked whether the large folio is on deferred list or not.

I've continued to find it hard to think about, hard to be convinced by
that sequence of checks, without an actual explicit _deferred_list check.

David has brilliantly come up with the failed THP migration example; and
I think now perhaps 5.8's 5503fbf2b0b8 ("khugepaged: allow to collapse
PTE-mapped compound pages") introduced another way?  But I certainly need
to reword that wagging finger pointing to your commit: these are much
more exceptional cases than I was thinking there.

I have this evening tried running swapping load on 5.10 and 6.6 and 6.11,
each with just a BUG_ON(!list_empty(the deferred list)) before resetting
memcg in mem_cgroup_swapout() - it would of course be much easier to hit
such a BUG_ON() than for the consequent wrong locking to be so unlucky
as to actually result in list corruption.

None of those BUG_ONs hit; though I was only running each for 1.5 hours,
and looking at vmstats at the end, saw they were really not exercising
deferred split very much at all.  I'd been hoping for an immediate hit
(as on 6.12-rc) to confirm my doubt, but no.  That doesn't *prove* you're
right, but (excepting David's and my weird cases) I bet you are right.
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 4b21a368b4e2..57f64b5d0004 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2681,7 +2681,9 @@ void free_unref_folios(struct folio_batch *folios)
> >  		unsigned long pfn = folio_pfn(folio);
> >  		unsigned int order = folio_order(folio);
> >
> > -		folio_undo_large_rmappable(folio);
> > +		if (mem_cgroup_disabled())
> > +			folio_unqueue_deferred_split(folio);
>
> This looks confusing. It looks all callsites of free_unref_folios()
> have folio_unqueue_deferred_split() and memcg uncharge called before
> it. If there is any problem, memcg uncharge should catch it. Did I
> miss something?

I don't understand what you're suggesting there.  But David remarked on
it too, so it seems that I do need at least to add some comment.

I'd better re-examine the memcg/non-memcg forking paths again: but by
strange coincidence (or suggestion?), I'm suddenly now too tired here,
precisely where David stopped too.  I'll have to come back to this
tomorrow, sorry.

Hugh