On Wed, 13 Mar 2024, Sasha Levin wrote:

> From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
> 
> [ Upstream commit 2ac9e99f3b21b2864305fbfba4bae5913274c409 ]
> 
> Rename numamigrate_isolate_page() to numamigrate_isolate_folio(), then
> make it take a folio and use the folio API to save compound_head() calls.
> 
> Link: https://lkml.kernel.org/r/20230913095131.2426871-4-wangkefeng.wang@xxxxxxxxxx
> Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
> Reviewed-by: Zi Yan <ziy@xxxxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Stable-dep-of: 2774f256e7c0 ("mm/vmscan: fix a bug calling wakeup_kswapd() with a wrong zone index")

No it is not: that one is appropriate to include, this one is not.

Hugh

> Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
> ---
>  mm/migrate.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index c9fabb960996f..e5f2f7243a659 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2501,10 +2501,9 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
>  	return __folio_alloc_node(gfp, order, nid);
>  }
>  
> -static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> +static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
>  {
> -	int nr_pages = thp_nr_pages(page);
> -	int order = compound_order(page);
> +	int nr_pages = folio_nr_pages(folio);
>  
>  	/* Avoid migrating to a node that is nearly full */
>  	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> @@ -2516,22 +2515,23 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  			if (managed_zone(pgdat->node_zones + z))
>  				break;
>  		}
> -		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
> +		wakeup_kswapd(pgdat->node_zones + z, 0,
> +			      folio_order(folio), ZONE_MOVABLE);
>  		return 0;
>  	}
>  
> -	if (!isolate_lru_page(page))
> +	if (!folio_isolate_lru(folio))
>  		return 0;
>  
> -	mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_is_file_lru(page),
> +	node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
>  			    nr_pages);
>  
>  	/*
> -	 * Isolating the page has taken another reference, so the
> -	 * caller's reference can be safely dropped without the page
> +	 * Isolating the folio has taken another reference, so the
> +	 * caller's reference can be safely dropped without the folio
>  	 * disappearing underneath us during migration.
>  	 */
> -	put_page(page);
> +	folio_put(folio);
>  	return 1;
>  }
>  
> @@ -2565,7 +2565,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>  	if (page_is_file_lru(page) && PageDirty(page))
>  		goto out;
>  
> -	isolated = numamigrate_isolate_page(pgdat, page);
> +	isolated = numamigrate_isolate_folio(pgdat, page_folio(page));
>  	if (!isolated)
>  		goto out;
>  
> -- 
> 2.43.0
> 
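
For context on the change being backported: below is a minimal, self-contained
userspace sketch of why the folio conversion saves compound_head() calls. The
struct layouts and the *_model helper names are invented for illustration only,
not the real mm definitions; the point is that page-based helpers must re-derive
the head page on every call, while a folio is by definition already the head,
so it is looked up once (via page_folio()) and then reused.

	/*
	 * Illustrative userspace model, not kernel code: the structs and
	 * *_model helpers are simplified stand-ins for the real mm API.
	 */
	#include <stdio.h>

	struct page {
		struct page *compound_head;	/* head page of the compound page, or itself */
		unsigned int order;		/* only meaningful on the head page */
	};

	/* A "folio" is, by definition, a head page. */
	struct folio { struct page page; };

	static struct page *compound_head(struct page *page)
	{
		return page->compound_head;
	}

	/* Page-based helper: must find the head page on every call. */
	static unsigned int thp_nr_pages_model(struct page *page)
	{
		return 1u << compound_head(page)->order;
	}

	/* Convert once: after this, no further head-page lookups are needed. */
	static struct folio *page_folio_model(struct page *page)
	{
		return (struct folio *)compound_head(page);
	}

	/* Folio-based helper: already holds the head page. */
	static unsigned int folio_nr_pages_model(struct folio *folio)
	{
		return 1u << folio->page.order;
	}

	int main(void)
	{
		struct page pages[4];

		for (int i = 0; i < 4; i++) {
			pages[i].compound_head = &pages[0];
			pages[i].order = 0;
		}
		pages[0].order = 2;	/* model a 4-page compound page */

		struct page *tail = &pages[3];

		/* Old style: each helper re-derives the head page internally. */
		printf("thp_nr_pages:   %u\n", thp_nr_pages_model(tail));

		/* New style: convert to a folio once, then reuse it. */
		struct folio *folio = page_folio_model(tail);
		printf("folio_nr_pages: %u\n", folio_nr_pages_model(folio));
		return 0;
	}

In the patch above, the same pattern appears at the call site: the caller does
page_folio(page) once and numamigrate_isolate_folio() then uses folio_nr_pages(),
folio_order(), folio_isolate_lru() and folio_put() without any repeated
head-page lookups.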