Alexei and I spent some time today going back and forth on what the uapi to this allocator should look like in a BPF program. To both of our surprise, the problem space turned out to be far more complicated than we anticipated.

There are three primary problems we have to solve:
1) Knowing which allocator an object came from, so we can safely reclaim it when necessary (e.g., when freeing a map).
2) Type confusion between local and kernel types (i.e., a program allocating kernel types and passing them to helpers/kfuncs that don't expect them). This is especially important because the existing kptr mechanism assumes kernel types everywhere.
3) Allocated objects' lifetimes, allocator refcounting, etc. It all gets very hairy once you allow allocated objects in pinned maps.

This is the proposed design we landed on:

1. Allocators get their own MAP_TYPE_ALLOCATOR, so you can specify the initial capacity at creation time. value_size > 0 takes the kmem_cache path, probably with btf_value_type_id enforcement.

2. The helper APIs are just bpf_obj_alloc(bpf_map *, bpf_core_type_id_local(struct foo)) and bpf_obj_free(void *). Note that bpf_obj_free() only takes an object pointer.

3. To avoid mixing BTF type domains, a new type tag (provisionally __kptr_local) annotates fields that can hold values with verifier type `PTR_TO_BTF_ID | BTF_ID_LOCAL`. bpf_obj_alloc() only ever returns these local kptrs and only ever resolves the type against program-local BTF (in the verifier; at runtime it only needs an allocation size).

3.1. If we eventually need to pass these objects to kfuncs/helpers, we can introduce a new bpf_obj_export helper that takes a PTR_TO_LOCAL_BTF_ID and returns the corresponding PTR_TO_BTF_ID, after verifying against an allowlist of some kind. This would be the only place these objects can leak out of bpf land. If there's no runtime aspect (and there likely wouldn't be), we might consider doing this transparently, still against an allowlist of types.

4. To ensure the allocator stays alive while objects from it are alive, we must be able to identify which allocator each __kptr_local pointer came from, and we must keep the refcount up while any such values are alive. One concern here is that doing the refcount manipulation in kptr_xchg would be too expensive. The proposed solution is to:

4.1. Keep a struct bpf_mem_alloc * in a header just before the object pointer returned by bpf_mem_alloc(). This way we never lose track of which bpf_mem_alloc to return the object to, and it simplifies the bpf_obj_free() call.

4.2. Track used_allocators in each bpf_map. When unloading a program, we would walk all maps the program has access to (that have __kptr_local fields), walk each value, and ensure that any allocators not already in the map's used_allocators list are refcount_inc'd and added to it. Note that allocators are also kept alive by their bpf_map wrapper, but once that's gone, used_allocators is the main mechanism. Once the bpf_map is gone, the allocator cannot be used to allocate new objects; we can only return objects to it.

4.3. On map free, we walk all values and bpf_obj_free() the __kptr_local fields, then refcount_dec all the used_allocators.

Overall, we think this handles all the nasty corners - objects escaping into kfuncs/helpers when they shouldn't, pinned maps containing pointers to allocations, programs accessing multiple allocators getting deterministic freelist behavior - while keeping the API and complexity sane.
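To make the above concrete, here is a rough sketch of the program-side usage we have in mind. To be clear, none of this exists today: BPF_MAP_TYPE_ALLOCATOR (whatever the final constant ends up being called), the bpf_obj_alloc()/bpf_obj_free() helpers, and the __kptr_local tag are exactly the proposals above, and the declarations, map definitions, and section names below are purely illustrative.

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    /* Provisional type tag from point 3; the exact spelling is TBD. */
    #define __kptr_local __attribute__((btf_type_tag("kptr_local")))

    /* Placeholder declarations for the proposed helpers from point 2. */
    extern void *bpf_obj_alloc(void *allocator_map, __u32 local_type_id) __ksym;
    extern void bpf_obj_free(void *obj) __ksym;

    struct foo {
            long counter;
    };

    /* Point 1: allocator map. value_size > 0 would take the kmem_cache
     * path, with btf_value_type_id enforcing the object type; max_entries
     * stands in for "initial capacity at creation time".
     */
    struct {
            __uint(type, BPF_MAP_TYPE_ALLOCATOR);
            __uint(max_entries, 256);
            __type(value, struct foo);
    } foo_allocator SEC(".maps");

    /* A regular map whose values hold local kptrs (points 3 and 4). */
    struct map_val {
            struct foo __kptr_local *f;
    };

    struct {
            __uint(type, BPF_MAP_TYPE_HASH);
            __uint(max_entries, 16);
            __type(key, __u32);
            __type(value, struct map_val);
    } storage SEC(".maps");

    SEC("tp_btf/sys_enter")
    int use_allocator(void *ctx)
    {
            struct foo *f, *old;
            struct map_val *v;
            __u32 key = 0;

            /* The type id resolves against program-local BTF only; at
             * runtime the helper just needs the allocation size.
             */
            f = bpf_obj_alloc(&foo_allocator, bpf_core_type_id_local(struct foo));
            if (!f)
                    return 0;
            f->counter = 1;

            v = bpf_map_lookup_elem(&storage, &key);
            if (!v) {
                    /* Only the object pointer is needed; the owning
                     * bpf_mem_alloc is found via the header in front of
                     * the object (point 4.1).
                     */
                    bpf_obj_free(f);
                    return 0;
            }

            /* Stash the local kptr; the previous value, if any, must be freed. */
            old = bpf_kptr_xchg(&v->f, f);
            if (old)
                    bpf_obj_free(old);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";

The intent is that the verifier would track the result of bpf_obj_alloc() as `PTR_TO_BTF_ID | BTF_ID_LOCAL`, require it to be freed or stashed via bpf_kptr_xchg() before the program exits, and reject passing it to any helper/kfunc that expects a kernel type (short of the bpf_obj_export escape hatch from 3.1).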
The used_allocators approach can certainly be less conservative (or even fully precise), but for a v1 that's probably overkill.

Please feel free to shoot holes in this design! We tried to capture everything, but I'd love confirmation that we didn't miss anything.

--Delyan