On Sun, Nov 17, 2024 at 2:54 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 11/16/24 02:48, Alexei Starovoitov wrote:
> > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> >
> > Tracing BPF programs execute from tracepoints and kprobes where the
> > running context is unknown, but they need to request additional
> > memory. The prior workarounds were using pre-allocated memory and
> > BPF-specific freelists to satisfy such allocation requests. Instead,
> > introduce a __GFP_TRYLOCK flag that makes the page allocator
> > accessible from any context. It relies on the percpu free list of
> > pages that rmqueue_pcplist() should be able to pop a page from. If
> > that fails (due to IRQ re-entrancy or the list being empty), then
> > try_alloc_page() attempts to spin_trylock zone->lock and refill the
> > percpu freelist as normal.
> > A BPF program may execute with IRQs disabled and zone->lock is a
> > sleeping lock on RT, so trylock is the only option.
> > In theory we could introduce a percpu reentrance counter and
> > increment it every time spin_lock_irqsave(&zone->lock, flags) is
> > used, but we cannot rely on it. Even if this cpu is not in the
> > page_alloc path, spin_lock_irqsave() is not safe, since the BPF prog
> > might be called from a tracepoint where preemption is disabled. So
> > trylock only.
> >
> > There is no attempt to make free_page() accessible from any context
> > (yet). The BPF infrastructure will asynchronously free pages from
> > such contexts.
> > memcg is also not charged in the try_alloc_page() path. It has to be
> > done asynchronously to avoid sleeping on
> > local_lock_irqsave(&memcg_stock.stock_lock, flags).
> >
> > This is a first step towards supporting BPF requirements in SLUB
> > and getting rid of bpf_mem_alloc.
> > That goal was discussed at LSFMM: https://lwn.net/Articles/974138/
>
> Thanks for looking into this. I agree that something like __GFP_TRYLOCK
> would be necessary to distinguish those allocation contexts. But I'm
> wondering if the page allocator is the best place to start (or even
> necessary in the end)

It's necessary. More below.

> if the goal is to replace the kmalloc-sized
> allocations in bpf and not page sized?

bpf_ma has support for page sized allocs, but a freelist of single
pages isn't practical. The bpf side needs to be able to request many
pages, one page at a time. There is no need for order >= 1 allocs and
there never will be. One page at a time only.

> SLUB could have preallocated slab
> pages to not call into the page allocator in such situations?

prealloc cannot be sized. In the early days of bpf we preallocated
everything. When a bpf map is created it has a 'max_entries' limit, so
we know how much to preallocate, but since maps got big (in networking
use cases there are multiple multi-gigabyte maps) the full prealloc is
a waste of memory. Hence bpf_ma was added to allocate when necessary.
So far it's working ok for small allocations and the free list
watermark logic seems to be enough. But it doesn't work for page sized
allocs. So for bpf arena we require a sleepable context to populate
pages, and users are already complaining that we merely shifted the
problem onto them. Now bpf progs have to preallocate in sleepable
context, stash the memory somewhere and use it later. And the program
author cannot know beforehand how much to preallocate. How many flows
will the network load balancer see?
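To make the trylock-only flow from the commit message concrete, here
is a minimal sketch, assuming pcp_pop() and zone_refill_pcp() as
hypothetical stand-ins for the rmqueue_pcplist()/refill machinery in
mm/page_alloc.c; the real try_alloc_page_noprof() differs in detail:

  #include <linux/mm.h>
  #include <linux/spinlock.h>

  /* Illustrative only: not the code from the patch. */
  static struct page *try_alloc_page_sketch(struct zone *zone)
  {
          unsigned long flags;
          struct page *page;

          /* Fast path: pop from the percpu free list, no zone->lock. */
          page = pcp_pop(zone);
          if (page)
                  return page;

          /*
           * Refill path: zone->lock may already be held by the
           * interrupted context (and is a sleeping lock on RT), so
           * only trylock is safe here.
           */
          if (!spin_trylock_irqsave(&zone->lock, flags))
                  return NULL;    /* caller must tolerate failure */

          page = zone_refill_pcp(zone);
          spin_unlock_irqrestore(&zone->lock, flags);
          return page;
  }

The point is that every failure mode returns NULL instead of spinning
or sleeping, so the caller (the bpf side) has to tolerate allocation
failure in any context.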
> I've posted the first SLUB sheaves RFC this week [1]. The immediate
> motivation was different, but I did mention there this could perhaps
> become a basis for the bpf_mem_alloc replacement too. I'd imagine
> something like a set of kmem_buckets with sheaves enabled and either
> a flag like __GFP_TRYLOCK or a different variant of kmalloc() to only
> try using the sheaves. Didn't Cc you/bpf as it seemed too vague yet,
> but guess I should have.
>
> [1]
> https://lore.kernel.org/all/20241112-slub-percpu-caches-v1-0-ddc0bdc27e05@xxxxxxx/#t

Thanks for the link. I see why you think this cache array may help
maple_tree (though I'm sceptical without seeing performance numbers),
but I don't see how it helps bpf.

bpf_ma is not about performance. We tried to make it fast, but that
wasn't the goal. The main objective is to have a reasonable chance of
getting memory in any context. afaict sheaves don't help bpf_ma.

In the patch
[PATCH RFC 5/6] mm, slub: cheaper locking for percpu sheaves
you're calling it cpu_sheaves_trylock(), but then you do:

  local_lock(&ptr->member.llock);

How is this a trylock? This lock breaks slab re-entrancy, and lockdep
will complain if it sees local_lock() while a raw spinlock is already
held. So the way sheaves are coded, they're a no-go for bpf use cases.

In this GFP_TRYLOCK patch see this comment:

+ * Do not specify __GFP_ACCOUNT to avoid local_lock.

That's an important part. No locks, including local_locks, should be
taken with GFP_TRYLOCK.

The plan to replace bpf_ma has these rough steps:
1. try_alloc_page_noprof() that uses trylock only. This patch.
2. kmalloc() support for GFP_TRYLOCK that does its double cmpxchg and,
   when new_slab() is needed, calls it with TRYLOCK too. So all slub
   and page allocs are non-blocking.

The __free_pages() part is a bit tricky, since it needs to add a page
to the free list, but it has to use trylock and there are no gfp flags.
I'm thinking to use llist when trylock fails and truly add the page to
the freelist on the next call to free_pages() that doesn't have the
TRYLOCK flag.
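To illustrate that deferred-free idea, here is a rough sketch, assuming
the freed page's ->lru space can be reused as an llist_node and that
__free_one_page_locked() is a made-up stand-in for the real "return the
page to the free list under zone->lock" step:

  #include <linux/llist.h>
  #include <linux/mm.h>
  #include <linux/percpu.h>
  #include <linux/spinlock.h>

  static DEFINE_PER_CPU(struct llist_head, deferred_free_pages);

  /* Free path that may run in any context: never spins on zone->lock. */
  static void free_page_trylock_sketch(struct zone *zone, struct page *page)
  {
          unsigned long flags;

          if (spin_trylock_irqsave(&zone->lock, flags)) {
                  __free_one_page_locked(zone, page);
                  spin_unlock_irqrestore(&zone->lock, flags);
                  return;
          }
          /* Lock not available: stash the page on a lockless list. */
          llist_add((struct llist_node *)&page->lru,
                    this_cpu_ptr(&deferred_free_pages));
  }

  /* Called from a later free_pages() that is allowed to take the lock. */
  static void drain_deferred_pages_sketch(struct zone *zone)
  {
          struct llist_node *n;
          unsigned long flags;

          n = llist_del_all(this_cpu_ptr(&deferred_free_pages));
          spin_lock_irqsave(&zone->lock, flags);
          while (n) {
                  struct page *page = container_of((struct list_head *)n,
                                                   struct page, lru);

                  /* Grab ->next before the free path reuses this space. */
                  n = n->next;
                  __free_one_page_locked(zone, page);
          }
          spin_unlock_irqrestore(&zone->lock, flags);
  }

The drain would be triggered from the first free_pages() call that is
not in TRYLOCK context, so deferred pages eventually make it back to
the real freelist without ever blocking the restricted callers.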