On 1/15/25 03:17, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@xxxxxxxxxx>
>
> Introduce free_pages_nolock() that can free pages without taking locks.
> It relies on trylock and can be called from any context.
> Since spin_trylock() cannot be used in RT from hard IRQ or NMI,
> it uses a lockless link list to stash the pages, which will be freed
> by a subsequent free_pages() from a good context.
>
> Do not use llist unconditionally. BPF maps continuously
> allocate/free, so we cannot unconditionally delay the freeing to
> llist. When the memory becomes free, make it available to the
> kernel and BPF users right away if possible, and fall back to
> llist as the last resort.
>
> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

With:

> @@ -4853,6 +4905,17 @@ void __free_pages(struct page *page, unsigned int order)
>  }
>  EXPORT_SYMBOL(__free_pages);
>
> +/*
> + * Can be called while holding raw_spin_lock or from IRQ and NMI,
> + * but only for pages that came from try_alloc_pages():
> + * order <= 3, !folio, etc

I think order > 3 is fine, as the !pcp_allowed_order() case is handled
too? And what does "!folio" mean?

> + */
> +void free_pages_nolock(struct page *page, unsigned int order)
> +{
> +	if (put_page_testzero(page))
> +		__free_unref_page(page, order, FPI_TRYLOCK);

Hmm, this will reach reset_page_owner() and thus stackdepot, so same
mental note as for patch 1.

> +}
> +
>  void free_pages(unsigned long addr, unsigned int order)
>  {
>  	if (addr != 0) {
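
For anyone following along, here is a minimal self-contained userspace
sketch of the trylock + lockless-list pattern the commit message
describes. It is only an analogue: free_nolock(), free_sleepable(),
deferred_head and drain_deferred() are illustrative names, not the
kernel API, and pthread/C11 atomics stand in for the pcp trylock and
llist_add()/llist_del_all().

	/* Sketch of the free_pages_nolock() idea: never block on the
	 * lock; on contention, push to a lock-free list and let the
	 * next free from a "good context" drain it. */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct node {
		struct node *next;
	};

	static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;
	static _Atomic(struct node *) deferred_head;	/* llist analogue */

	static void free_one(struct node *n)
	{
		free(n);
	}

	/* Flush everything stashed by trylock failures.
	 * Called with free_lock held. */
	static void drain_deferred(void)
	{
		/* atomic exchange to NULL is what llist_del_all() does */
		struct node *n = atomic_exchange(&deferred_head, NULL);

		while (n) {
			struct node *next = n->next;

			free_one(n);
			n = next;
		}
	}

	/* Analogue of free_pages_nolock(): callable from any context
	 * because it never spins on free_lock. */
	static void free_nolock(struct node *n)
	{
		if (pthread_mutex_trylock(&free_lock) == 0) {
			/* got the lock: free right away and
			 * opportunistically drain the stash */
			drain_deferred();
			free_one(n);
			pthread_mutex_unlock(&free_lock);
			return;
		}

		/* contended: lock-free push, the cmpxchg loop that
		 * llist_add() performs */
		n->next = atomic_load(&deferred_head);
		while (!atomic_compare_exchange_weak(&deferred_head,
						     &n->next, n))
			;
	}

	/* Analogue of a plain free_pages() from sleepable context:
	 * takes the lock unconditionally and drains the stash. */
	static void free_sleepable(struct node *n)
	{
		pthread_mutex_lock(&free_lock);
		drain_deferred();
		free_one(n);
		pthread_mutex_unlock(&free_lock);
	}

	int main(void)
	{
		free_nolock(calloc(1, sizeof(struct node)));
		free_sleepable(calloc(1, sizeof(struct node)));
		printf("deferred list empty: %d\n",
		       atomic_load(&deferred_head) == NULL);
		return 0;
	}

The point of the "do not use llist unconditionally" paragraph shows up
in free_nolock(): the deferred list is only the contended-path
fallback, so memory goes back to the allocator immediately whenever
the trylock succeeds.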