On Tue, Dec 17, 2024 at 8:59 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> On Tue, Dec 17, 2024 at 7:07 PM <alexei.starovoitov@xxxxxxxxx> wrote:
> >
> > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> >
> > Introduce free_pages_nolock() that can free pages without taking locks.
> > It relies on trylock and can be called from any context.
> > Since spin_trylock() cannot be used in RT from hard IRQ or NMI context,
> > it uses a lockless linked list to stash the pages, which will be freed
> > by a subsequent free_pages() from a good context.
> >
> > Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> > ---
> >  include/linux/gfp.h      |  1 +
> >  include/linux/mm_types.h |  4 ++
> >  include/linux/mmzone.h   |  3 ++
> >  mm/page_alloc.c          | 79 ++++++++++++++++++++++++++++++++++++----
> >  4 files changed, 79 insertions(+), 8 deletions(-)
> >
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index 65b8df1db26a..ff9060af6295 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -372,6 +372,7 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
> >  	__get_free_pages((gfp_mask) | GFP_DMA, (order))
> >
> >  extern void __free_pages(struct page *page, unsigned int order);
> > +extern void free_pages_nolock(struct page *page, unsigned int order);
> >  extern void free_pages(unsigned long addr, unsigned int order);
> >
> >  #define __free_page(page) __free_pages((page), 0)
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index 7361a8f3ab68..52547b3e5fd8 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -99,6 +99,10 @@ struct page {
> >  				/* Or, free page */
> >  				struct list_head buddy_list;
> >  				struct list_head pcp_list;
> > +				struct {
> > +					struct llist_node pcp_llist;
> > +					unsigned int order;
> > +				};
> >  			};
> >  			/* See page-flags.h for PAGE_MAPPING_FLAGS */
> >  			struct address_space *mapping;
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index b36124145a16..1a854e0a9e3b 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -953,6 +953,9 @@ struct zone {
> >  	/* Primarily protects free_area */
> >  	spinlock_t lock;
> >
> > +	/* Pages to be freed when next trylock succeeds */
> > +	struct llist_head trylock_free_pages;
> > +
> >  	/* Write-intensive fields used by compaction and vmstats. */
> >  	CACHELINE_PADDING(_pad2_);
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index d23545057b6e..10918bfc6734 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -88,6 +88,9 @@ typedef int __bitwise fpi_t;
> >   */
> >  #define FPI_TO_TAIL		((__force fpi_t)BIT(1))
> >
> > +/* Free the page without taking locks. Rely on trylock only. */
> > +#define FPI_TRYLOCK		((__force fpi_t)BIT(2))
> > +
>
> The comment above the definition of fpi_t mentions that it's for
> non-pcp variants of free_pages(), so I guess that needs to be updated
> in this patch.

No. The comment:

/* Free Page Internal flags: for internal, non-pcp variants of free_pages(). */
typedef int __bitwise fpi_t;

is still valid. The FPI_TRYLOCK flag mostly takes effect after the pcp
stage is over.

> More importantly, I think the comment states this mainly because the
> existing flags won't be properly handled when freeing pages to the
> pcplist. The flags will be lost once the pages are added to the
> pcplist, and won't be propagated when the pages are eventually freed
> to the buddy allocator (e.g. through free_pcppages_bulk()).

Correct. fpi_t flags have a local effect. Nothing new here.
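
To make the deferral scheme concrete, here is a minimal sketch of the idea
the commit message describes. It is not the patch body: the helper names
free_one_page_trylock() and drain_trylock_free_pages() are hypothetical,
while zone->trylock_free_pages, page->pcp_llist, page->order and
FPI_TRYLOCK come from the diff above.

/*
 * Sketch only (hypothetical helpers, not the actual patch body).
 * Free a page from any context: if the zone lock cannot be trylocked
 * (e.g. from hard IRQ or NMI on RT), stash the page on the zone's
 * lockless list so that a later free from a good context can drain it.
 */
static void free_one_page_trylock(struct zone *zone, struct page *page,
				  unsigned int order)
{
	unsigned long flags;

	if (!spin_trylock_irqsave(&zone->lock, flags)) {
		/* Remember the order; the page is not on any buddy list. */
		page->order = order;
		llist_add(&page->pcp_llist, &zone->trylock_free_pages);
		return;
	}
	__free_one_page(page, page_to_pfn(page), zone, order,
			get_pfnblock_migratetype(page, page_to_pfn(page)),
			FPI_TRYLOCK);
	spin_unlock_irqrestore(&zone->lock, flags);
}

/*
 * Drain side of the sketch: runs with zone->lock held, from a context
 * where taking the lock was possible (e.g. a regular free_pages()).
 */
static void drain_trylock_free_pages(struct zone *zone)
{
	struct llist_node *head = llist_del_all(&zone->trylock_free_pages);
	struct page *page, *next;

	llist_for_each_entry_safe(page, next, head, pcp_llist)
		__free_one_page(page, page_to_pfn(page), zone, page->order,
				get_pfnblock_migratetype(page,
							 page_to_pfn(page)),
				FPI_TRYLOCK);
}

The llist_add()/llist_del_all() pair needs no lock, and on architectures
with an NMI-safe cmpxchg it is usable even from NMI handlers, which is
what lets the stash side run in any context while the actual buddy free
stays under zone->lock.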