On Fri, Nov 22, 2024 at 01:54:58PM -0800, Alexei Starovoitov wrote:
> On Fri, Nov 22, 2024 at 1:12 AM Matt Bobrowski <mattbobrowski@xxxxxxxxxx> wrote:
> >
> > On Wed, Nov 20, 2024 at 04:51:36PM -0800, Alexei Starovoitov wrote:
> > > On Tue, Nov 19, 2024 at 4:22 AM Matt Bobrowski <mattbobrowski@xxxxxxxxxx> wrote:
> > > >
> > > > Hi,
> > > >
> > > > Currently, we have BPF kfuncs which allow BPF programs to add and
> > > > remove elements from a BPF linked list. However, we're currently
> > > > missing other simple capabilities, like being able to iterate over the
> > > > elements within the BPF linked lists. What is our current appetite
> > > > with regards to adding new BPF kfuncs that support this kind of
> > > > capability to BPF linked lists?
> > >
> > > What kind of kfuncs do you have in mind for link lists ?
> >
> > At this point, it'd have to be some kind of iterator based BPF kfunc
> > that allows a BPF program to traverse over the supplied BPF linked
> > list, coupled with a delete based BPF kfunc such that elements from
> > the list can be removed whilst performing the iteration,
> > i.e. list_for_each_safe/list_del. Now that I know you're not
> > completely opposed to adding such BPF kfuncs, I can concretely start
> > thinking about what this will actually end up looking like. But
> > essentially, it'd need to be BPF kfuncs that support those two
> > previously mentioned capabilities, being traversal and arbitrary
> > removal of an element whilst performing the traversal.
>
> iterator and removal would need to be done while the lock is held.
> There is no support for such things in the verifier.
> I don't think it will be easy.

Hm, I was under the impression that this would rather be trivial to
add, but you'd obviously have more of an idea than I do. Let me think
about how I want to go about adding these BPF kfuncs and come back to
you with a proposal.
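
For reference, the semantics I'm after are just the classic in-kernel
pattern below. This is plain kernel C rather than BPF, and purely
illustrative; the shape and naming of the eventual kfuncs/iterator is
still completely open:

#include <linux/list.h>
#include <linux/slab.h>

struct foo {
        int val;
        struct list_head node;
};

/* Walk the list and unlink/free matching entries. The _safe variant
 * caches the next entry up front, which is what makes removal
 * mid-iteration legal.
 */
static void drop_matching(struct list_head *head, int val)
{
        struct foo *cur, *tmp;

        list_for_each_entry_safe(cur, tmp, head, node) {
                if (cur->val == val) {
                        list_del(&cur->node);
                        kfree(cur);
                }
        }
}

The BPF-side equivalent would presumably be an open-coded iterator
kfunc plus a removal kfunc with those semantics, with the verifier, as
you point out, having to enforce that the list's bpf_spin_lock is held
across the entire walk.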

> > > So far the only user of bpf_rbtree is sched-ext.
> > > It was used in one scheduler and the experience was painful.
> > > There is a chance that we will remove rbtree and link list
> > > support from the verifier to reduce complexity.
> > > So new link list kfuncs may be ok, but potentially not for too long.
> >
> > Noted.
> >
> > > > I know that we're now somewhat advocating for using BPF arenas
> > > > whenever and wherever possible, especially when it comes to building
> > > > out and supporting more complicated data structures in BPF. However,
> > > > IMO BPF linked lists still have their place. Specifically, and as of
> > > > now, I'd argue that the BPF linked list implementation could be
> > > > considered more memory efficient when compared to a BPF arena backed
> > > > linked list implementation. This is purely due to the fact that the
> > > > BPF linked list implementation can perform more constrained memory
> > > > allocations for elements via bpf_obj_new_impl() based on the demand,
> > > > whereas for a BPF arena based implementation a BPF program needs to
> > > > allocate memory upfront in terms of the number of pages (modulo the
> > > > fact that not all pages for the BPF arena will necessarily be reserved
> > > > upfront). The fact that allocations are performed in terms of
> > > > multiples of PAGE_SIZE can lead to unnecessary memory wastage.
> > >
> > > I don't follow this logic.
> > > bpf_mem_alloc is relying on slab that relies on page alloc.
> > > So either arena or bpf_ma allocates a page at a time.
> > > So from that pov the cost is the same.
> >
> > Oh, what? So, both are actually performing full page sized allocations
> > whenever there's a need to fetch more memory? My shallow understanding
> > at this point was that the BPF specific memory allocator simply acts
> > as a front-end cache to kmalloc(), and that depending on the size of an
> > allocation request, i.e. one made via bpf_obj_new_impl() for example,
> > it is fulfilled from the corresponding freelist. Any needs for
> > refilling a freelist due to exhaustion pressure are performed at
> > freelist size granularity i.e. 16, 32, 64, 128, 256..., 4096.
>
> Correct, but how do you think kmalloc works?
> It's a slab on top of the buddy page allocator.
> Same thing at the end.

Right, I can see what you mean by this. So, whether a memory
allocation request is fulfilled by the BPF specific memory allocator
or by a BPF arena, in the end, once more memory is needed, you're
ultimately interfacing with the same backend, i.e. the buddy
allocator, which itself still manages memory at page-sized
granularity. I had just assumed that, because kmalloc() goes through
the middle layer, being the SLUB/SLAB/SLOB allocator, its allocation
requests are also served from pre-allocated caches. So, in theory,
when the BPF specific memory allocator needs more memory, those
requests would be fulfilled from that layer of caching first, before
going all the way down to the buddy allocator and getting more pages
of memory directly from it. Admittedly though, I have not looked at
how BPF arena memory allocation requests are fulfilled, so it could
actually end up taking the exact same route. Maybe I'll have a look
at that tonight, as I'm curious now.
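
For concreteness, the comparison I originally had in mind is between
the two allocation paths a BPF program actually sees. A rough,
untested sketch (this leans on the bpf_experimental.h and
bpf_arena_common.h declarations from selftests, the struct/map/program
names are made up purely for illustration, and I haven't double-checked
which program types every one of these kfuncs is registered for):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"
#include "bpf_arena_common.h"

char _license[] SEC("license") = "GPL";

/* Stand-in for a list element; a real one would also embed a
 * struct bpf_list_node.
 */
struct elem {
        u64 value;
};

struct {
        __uint(type, BPF_MAP_TYPE_ARENA);
        __uint(map_flags, BPF_F_MMAPABLE);
        __uint(max_entries, 1); /* arena size, in pages */
} arena SEC(".maps");

SEC("syscall")
int alloc_compare(void *ctx)
{
        struct elem *e;
        void __arena *page;

        /* Per-object allocation through the BPF memory allocator:
         * sized to the struct, served from freelists that are refilled
         * behind the scenes at bucket granularity.
         */
        e = bpf_obj_new(typeof(*e));
        if (e)
                bpf_obj_drop(e);

        /* Arena allocation: the smallest request a program can make
         * is a whole page.
         */
        page = bpf_arena_alloc_pages(&arena, NULL, 1,
                                     -1 /* NUMA_NO_NODE */, 0);
        if (page)
                bpf_arena_free_pages(&arena, page, 1);

        return 0;
}

Either way, as you say, once the respective cache/freelist is
exhausted, the memory ultimately comes out of the buddy allocator a
page at a time.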