On Wed, Jul 06, 2022 at 10:50:34AM -0700, Alexei Starovoitov wrote:
> On Mon, Jul 04, 2022 at 09:34:23PM +0100, Matthew Wilcox wrote:
> > On Mon, Jun 27, 2022 at 12:03:08AM -0700, Christoph Hellwig wrote:
> > > I'd suggest you discuss your needs with the slab maintainers and the mm
> > > community first.
> > >
> > > On Wed, Jun 22, 2022 at 05:32:25PM -0700, Alexei Starovoitov wrote:
> > > > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> > > >
> > > > Introduce any context BPF specific memory allocator.
> > > >
> > > > Tracing BPF programs can attach to kprobe and fentry. Hence they
> > > > run in unknown context where calling plain kmalloc() might not be safe.
> > > > Front-end kmalloc() with per-cpu per-bucket cache of free elements.
> > > > Refill this cache asynchronously from irq_work.
> >
> > I can't tell from your description whether a bump allocator would work
> > for you. That is, can you tell which allocations need to persist past
> > program execution (and use kmalloc for them) and which can be freed as
> > soon as the program has finished (and can use the bump allocator)?
> >
> > If so, we already have one for you, the page_frag allocator
> > (Documentation/vm/page_frags.rst). It might need to be extended to meet
> > your needs, but it's certainly faster than the kmalloc allocator.
>
> Already looked at it, and into mempool, and everything we could find.
> All 'normal' allocators sooner or later synchronously call into page_alloc,

Today it does, yes. But it might be adaptable to your needs if only I
knew what those needs were. For example, I assume that a BPF program
has a fairly tight limit on how much memory it can cause to be
allocated. Right?
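
For reference, the scheme Alexei describes above (a per-cpu cache of free
elements, refilled asynchronously from irq_work so the BPF program never
calls kmalloc() directly) looks roughly like the sketch below. This is
only an illustration of the idea, not the actual patch: the names
(unit_cache, unit_alloc, CACHE_LOW_WATERMARK, ELEM_SIZE) are invented
here, a single element size stands in for the real per-size buckets, and
the NMI-reentrancy handling the real code needs is omitted.

	#include <linux/cpumask.h>
	#include <linux/init.h>
	#include <linux/irq_work.h>
	#include <linux/kernel.h>
	#include <linux/llist.h>
	#include <linux/percpu.h>
	#include <linux/slab.h>

	#define CACHE_LOW_WATERMARK	8	/* refill when fewer elements remain */
	#define CACHE_REFILL_BATCH	16	/* elements added per refill */
	#define ELEM_SIZE		64	/* one bucket size; real code has several */

	struct unit_cache {
		struct llist_head free_list;	/* pre-allocated elements */
		int free_cnt;
		struct irq_work refill_work;
	};

	static DEFINE_PER_CPU(struct unit_cache, unit_cache);

	/* irq_work callback: runs in a context where calling kmalloc() is safe. */
	static void cache_refill(struct irq_work *work)
	{
		struct unit_cache *c = container_of(work, struct unit_cache, refill_work);
		int i;

		for (i = 0; i < CACHE_REFILL_BATCH; i++) {
			void *obj = kmalloc(ELEM_SIZE, GFP_ATOMIC);

			if (!obj)
				break;
			/* a free element's first bytes double as the llist_node */
			llist_add(obj, &c->free_list);
			c->free_cnt++;
		}
	}

	/* Called from BPF program context: kprobe, fentry, possibly NMI. */
	static void *unit_alloc(void)
	{
		struct unit_cache *c = this_cpu_ptr(&unit_cache);
		struct llist_node *n;
		unsigned long flags;

		local_irq_save(flags);
		n = __llist_del_first(&c->free_list);
		if (n)
			c->free_cnt--;
		/* ask for an asynchronous refill instead of calling kmalloc() here */
		if (c->free_cnt < CACHE_LOW_WATERMARK)
			irq_work_queue(&c->refill_work);
		local_irq_restore(flags);
		return n;
	}

	static int __init unit_cache_init(void)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct unit_cache *c = per_cpu_ptr(&unit_cache, cpu);

			init_llist_head(&c->free_list);
			init_irq_work(&c->refill_work, cache_refill);
			/* prime the cache from process context at init time */
			cache_refill(&c->refill_work);
		}
		return 0;
	}
	core_initcall(unit_cache_init);

The point of the shape above is that the allocation fast path only pops a
pre-allocated element off a per-cpu llist, and the slow path (the actual
slab call) is deferred to irq_work, which is why the cache, and hence how
much memory a program can pin, has to be bounded up front.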