On 2025-01-14 18:17:41 [-0800], Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@xxxxxxxxxx>
>
> Introduce free_pages_nolock() that can free pages without taking locks.
> It relies on trylock and can be called from any context.
> Since spin_trylock() cannot be used in RT from hard IRQ or NMI
> it uses lockless link list to stash the pages which will be freed
> by subsequent free_pages() from good context.
>
> Do not use llist unconditionally. BPF maps continuously
> allocate/free, so we cannot unconditionally delay the freeing to
> llist. When the memory becomes free make it available to the
> kernel and BPF users right away if possible, and fallback to
> llist as the last resort.
>
> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>

Acked-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

…

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 74c2a7af1a77..a9c639e3db91 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1247,13 +1250,44 @@ static void split_large_buddy(struct zone *zone, struct page *page,
…
>  static void free_one_page(struct zone *zone, struct page *page,
>  			   unsigned long pfn, unsigned int order,
>  			   fpi_t fpi_flags)
>  {
> +	struct llist_head *llhead;
>  	unsigned long flags;
>
> -	spin_lock_irqsave(&zone->lock, flags);
> +	if (!spin_trylock_irqsave(&zone->lock, flags)) {
> +		if (unlikely(fpi_flags & FPI_TRYLOCK)) {
> +			add_page_to_zone_llist(zone, page, order);
> +			return;
> +		}
> +		spin_lock_irqsave(&zone->lock, flags);
> +	}
> +
> +	/* The lock succeeded. Process deferred pages. */
> +	llhead = &zone->trylock_free_pages;
> +	if (unlikely(!llist_empty(llhead) && !(fpi_flags & FPI_TRYLOCK))) {

Thank you.

> +		struct llist_node *llnode;
> +		struct page *p, *tmp;
> +
> +		llnode = llist_del_all(llhead);
> +		llist_for_each_entry_safe(p, tmp, llnode, pcp_llist) {
> +			unsigned int p_order = p->order;
> +
> +			split_large_buddy(zone, p, page_to_pfn(p), p_order, fpi_flags);
> +			__count_vm_events(PGFREE, 1 << p_order);
> +		}
> +	}

Sebastian
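
[Editor's note: for readers outside the mm tree, the shape of the change above is a generic trylock-or-defer pattern. Below is a minimal userspace sketch of it, assuming POSIX threads and C11 atomics. fake_zone, defer_free, drain_deferred and free_item are invented names for illustration, not the kernel's API; the hand-rolled lockless LIFO stands in for the llist-based zone->trylock_free_pages.]

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct node {
		struct node *next;
		int id;				/* stand-in for a page */
	};

	struct fake_zone {
		pthread_mutex_t lock;		/* stands in for zone->lock */
		_Atomic(struct node *) deferred; /* lockless LIFO backlog */
	};

	/* Lockless push; safe from any context since it never blocks. */
	static void defer_free(struct fake_zone *z, struct node *n)
	{
		struct node *head = atomic_load(&z->deferred);

		do {
			n->next = head;
		} while (!atomic_compare_exchange_weak(&z->deferred, &head, n));
	}

	/* Called with z->lock held: drain items stashed by earlier failures. */
	static void drain_deferred(struct fake_zone *z)
	{
		struct node *n = atomic_exchange(&z->deferred, NULL);

		while (n) {
			struct node *next = n->next;

			printf("freed deferred item %d\n", n->id);
			free(n);
			n = next;
		}
	}

	/* trylock_only plays the role of FPI_TRYLOCK. */
	static void free_item(struct fake_zone *z, struct node *n, bool trylock_only)
	{
		if (pthread_mutex_trylock(&z->lock) != 0) {
			if (trylock_only) {
				/* Must not block here; stash as a last resort. */
				defer_free(z, n);
				return;
			}
			/* Good context: waiting on the lock is fine. */
			pthread_mutex_lock(&z->lock);
		}

		printf("freed item %d\n", n->id);
		free(n);

		/* Mirror the patch: only non-trylock callers pay for the drain. */
		if (!trylock_only)
			drain_deferred(z);

		pthread_mutex_unlock(&z->lock);
	}

	int main(void)
	{
		struct fake_zone z = { .lock = PTHREAD_MUTEX_INITIALIZER };
		struct node *a = malloc(sizeof(*a)), *b = malloc(sizeof(*b));

		atomic_init(&z.deferred, NULL);
		a->id = 1;
		b->id = 2;

		/* Simulate a restricted context hitting a contended lock. */
		pthread_mutex_lock(&z.lock);
		free_item(&z, a, true);		/* trylock fails -> deferred */
		pthread_mutex_unlock(&z.lock);

		free_item(&z, b, false);	/* frees b, then drains a */
		return 0;
	}

As in the patch, only callers that are allowed to block ever drain the backlog, so the trylock path stays cheap and remains safe to enter from any context.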