On Mon, Aug 12, 2024 at 5:22 AM <yangge1116@xxxxxxx> wrote:
>
> From: yangge <yangge1116@xxxxxxx>
>
> Whoever passes a folio to __folio_batch_add_and_move() must hold
> a reference, otherwise something else would already be messed up.
> If the folio is referenced, it will not be freed elsewhere, so we
> can safely clear the folio's lru flag. As discussed with David
> in [1], we should take the reference after testing the LRU flag,
> not before.
>
> Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@xxxxxxxxxx/ [1]
> Signed-off-by: yangge <yangge1116@xxxxxxx>
> Acked-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>  mm/swap.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
>
> V2:
> Add sanity check suggested by David
>
> diff --git a/mm/swap.c b/mm/swap.c
> index 67a2467..c048659 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -226,12 +226,11 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
>  {
>          unsigned long flags;
>
> -        folio_get(folio);
> -
> -        if (on_lru && !folio_test_clear_lru(folio)) {
> -                folio_put(folio);
> +        if (on_lru && !folio_test_clear_lru(folio))
>                  return;
> -        }
> +
> +        VM_WARN_ON_ONCE(!folio_ref_count(folio));
> +        folio_get(folio);

No need to check folio_ref_count() here; folio_get() already does a
stronger sanity check via folio_ref_zero_or_close_to_overflow().
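
For reference, folio_get() in recent mainline looks roughly like the
sketch below (paraphrased from include/linux/mm.h; exact details may
vary between kernel versions):

static inline bool folio_ref_zero_or_close_to_overflow(const struct folio *folio)
{
        /* true if the refcount is 0 or within 127 of wrapping around */
        return ((unsigned int) folio_ref_count(folio) + 127u) <= 127u;
}

static inline void folio_get(struct folio *folio)
{
        /* with CONFIG_DEBUG_VM this fires if the caller holds no reference */
        VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio);
        folio_ref_add(folio, 1);
}

So on debug builds a caller that passes a folio without holding a
reference is already reported by the folio_get() call itself, and the
added VM_WARN_ON_ONCE() duplicates that check.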