On Sat, Dec 23, 2023 at 5:28 AM Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx> wrote:
>
> Add folio_alloc_node() to replace alloc_pages_node() and then use
> folio APIs throughout instead of converting back to pages.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---

[...]

> diff --git a/mm/slub.c b/mm/slub.c
> index 35aa706dc318..261f01915d9b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3919,18 +3919,17 @@ EXPORT_SYMBOL(kmem_cache_alloc_node);
>   */
>  static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
>  {
> -	struct page *page;
> +	struct folio *folio;
>  	void *ptr = NULL;
>  	unsigned int order = get_order(size);
>
>  	if (unlikely(flags & GFP_SLAB_BUG_MASK))
>  		flags = kmalloc_fix_flags(flags);
>
> -	flags |= __GFP_COMP;
> -	page = alloc_pages_node(node, flags, order);
> -	if (page) {
> -		ptr = page_address(page);
> -		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> +	folio = folio_alloc_node(flags, order, node);

folio_alloc_node()
  -> __folio_alloc_node()
    -> __folio_alloc()
      -> page_rmappable_folio()
        -> folio_prep_large_rmappable()

I think it's not intentional to call this?

> +	if (folio) {
> +		ptr = folio_address(folio);
> +		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
>  			PAGE_SIZE << order);
>  	}

Thanks,
Hyeonggon