Re: [PATCH v3 bpf-next 00/15] bpf: BPF specific memory allocator, UAPI in particular

Thanks for taking a look, Kumar!

On Fri, 2022-08-26 at 06:03 +0200, Kumar Kartikeya Dwivedi wrote:
> 
> On Thu, 25 Aug 2022 at 02:56, Delyan Kratunov <delyank@xxxxxx> wrote:
> > 
> > Alexei and I spent some time today going back and forth on what the uapi to this
> > allocator should look like in a BPF program. To both of our surprise, the problem
> > space became far more complicated than we anticipated.
> > 
> > There are three primary problems we have to solve:
> > 1) Knowing which allocator an object came from, so we can safely reclaim it when
> > necessary (e.g., freeing a map).
> > 2) Type confusion between local and kernel types. (I.e., a program allocating kernel
> > types and passing them to helpers/kfuncs that don't expect them). This is especially
> > important because the existing kptr mechanism assumes kernel types everywhere.
> 
> Why is the btf_is_kernel(reg->btf) check not enough to distinguish
> local vs kernel kptr?

Answered below.

> We add that wherever kfunc/helpers verify the PTR_TO_BTF_ID right now.
> 
> Fun fact: I added a similar check on purpose in map_kptr_match_type,
> since Alexei mentioned back then he was working on a local type
> allocator, so forgetting to add it later would have been a problem.
> 
> > 3) Allocated objects lifetimes, allocator refcounting, etc. It all gets very hairy
> > when you allow allocated objects in pinned maps.
> > 
> > This is the proposed design that we landed on:
> > 
> > 1. Allocators get their own MAP_TYPE_ALLOCATOR, so you can specify initial capacity
> > at creation time. Value_size > 0 takes the kmem_cache path. Probably with
> > btf_value_type_id enforcement for the kmem_cache path.
> > 
> > 2. The helper APIs are just bpf_obj_alloc(bpf_map *, bpf_core_type_id_local(struct
> > foo)) and bpf_obj_free(void *). Note that obj_free() only takes an object pointer.
> > 
> > 3. To avoid mixing BTF type domains, a new type tag (provisionally __kptr_local)
> > annotates fields that can hold values with verifier type `PTR_TO_BTF_ID |
> > BTF_ID_LOCAL`. obj_alloc only ever returns these local kptrs and only ever resolves
> > against program-local btf (in the verifier, at runtime it only gets an allocation
> > size).
> 
> This is ok too, but I think just gating everywhere with btf_is_kernel
> would be fine as well.


Yeah, I can get behind not using BTF_ID_LOCAL as a type flag and instead encoding
locality in the btf field of the register/stack slot/kptr/helper proto. That said, we
still need the new type tag so that the map BTF parsing code knows to record the local
BTF in the kptr descriptor.
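
To make sure we're picturing the same program-facing shape, here's a rough sketch of
how I'd expect a program to use this (the map type, the helpers and the __kptr_local
tag are only the provisional names from above, none of this exists yet):

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>

  /* provisional tag from 3. above */
  #define __kptr_local __attribute__((btf_type_tag("kptr_local")))

  /* proposed helpers from 2. above; declarations assumed for the sketch */
  extern void *bpf_obj_alloc(void *map, __u64 local_type_id);
  extern void bpf_obj_free(void *obj);

  struct foo {                            /* exists only in the prog's BTF */
          long counter;
  };

  struct map_value {
          struct foo __kptr_local *f;     /* parsed against local BTF */
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ALLOCATOR);   /* provisional map type */
          __uint(max_entries, 256);               /* initial capacity */
          __type(value, struct foo);              /* kmem_cache path */
  } allocator SEC(".maps");

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, struct map_value);
  } data SEC(".maps");

  char LICENSE[] SEC("license") = "GPL";

  SEC("tp_btf/sched_switch")
  int prog(void *ctx)
  {
          struct foo *f, *old;
          struct map_value *v;
          int zero = 0;

          v = bpf_map_lookup_elem(&data, &zero);
          if (!v)
                  return 0;

          f = bpf_obj_alloc(&allocator, bpf_core_type_id_local(struct foo));
          if (!f)
                  return 0;

          old = bpf_kptr_xchg(&v->f, f);  /* store the local kptr */
          if (old)
                  bpf_obj_free(old);
          return 0;
  }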

> 
> > 3.1. If eventually we need to pass these objects to kfuncs/helpers, we can introduce
> > a new bpf_obj_export helper that takes a PTR_TO_LOCAL_BTF_ID and returns the
> > corresponding PTR_TO_BTF_ID, after verifying against an allowlist of some kind. This
> 
> It would be fine to allow passing if it is just plain data (e.g. what
> scalar_struct check does for kfuncs).
> There we had the issue where it can take PTR_TO_MEM, PTR_TO_BTF_ID,
> etc. so it was necessary to restrict the kind of type to LCD.
> 
> But we don't have to do it from day 1, just listing what should be ok.

That's a good call; I'll add it to the initial can-transition-to-kernel-kptr logic.
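
For the allowlist in 3.1, I'd probably just reuse the existing BTF ID set machinery,
roughly like below (the set contents are made up, "foo" stands in for whatever kernel
types we decide to allow):

  /* include/linux/btf_ids.h macros, resolved at build time */
  BTF_SET_START(obj_export_allowed)
  BTF_ID(struct, foo)
  BTF_SET_END(obj_export_allowed)

  /* verifier side, roughly: reject bpf_obj_export for anything else */
  if (!btf_id_set_contains(&obj_export_allowed, kernel_type_id))
          return -EACCES;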

> 
> > would be the only place these objects can leak out of bpf land. If there's no runtime
> > aspect (and there likely wouldn't be), we might consider doing this transparently,
> > still against an allowlist of types.
> > 
> > 4. To ensure the allocator stays alive while objects from it are alive, we must be
> > able to identify which allocator each __kptr_local pointer came from, and we must
> > keep the refcount up while any such values are alive. One concern here is that doing
> > the refcount manipulation in kptr_xchg would be too expensive. The proposed solution
> > is to:
> > 4.1 Keep a struct bpf_mem_alloc* in the header before the returned object pointer
> > from bpf_mem_alloc(). This way we never lose track which bpf_mem_alloc to return the
> > object to and can simplify the bpf_obj_free() call.
> > 4.2. Tracking used_allocators in each bpf_map. When unloading a program, we would
> > walk all maps that the program has access to (that have kptr_local fields), walk each
> > value and ensure that any allocators not already in the map's used_allocators are
> > refcount_inc'd and added to the list. Do note that allocators are also kept alive by
> > their bpf_map wrapper but after that's gone, used_allocators is the main mechanism.
> > Once the bpf_map is gone, the allocator cannot be used to allocate new objects, we
> > can only return objects to it.
> > 4.3. On map free, we walk and obj_free() all the __kptr_local fields, then
> > refcount_dec all the used_allocators.
> > 
> 
> So to summarize your approach:
> Each allocation has a bpf_mem_alloc pointer before it to track its
> owner allocator.
> We know used_maps of each prog, so during unload of program, walk all
> local kptrs in each used_maps map values, and that map takes a
> reference to the allocator stashing it in used_allocators list,
> because prog is going to relinquish its ref to allocator_map (which if
> it were the last one would release allocator reference as well for
> local kptrs held by those maps).
> Once prog is gone, the allocator is kept alive by other maps holding
> objects allocated from it. References to the allocator are taken
> lazily when required.
> Did I get it right?

That's correct!
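
Concretely, for 4.1 I'm picturing the per-unit layout roughly like this (names are
hypothetical, unit_alloc/unit_free stand in for the allocator's internal paths):

  struct obj_hdr {
          struct bpf_mem_alloc *alloc;    /* owning allocator */
          /* memory handed to the program starts right after the header */
  };

  void *bpf_obj_alloc_impl(struct bpf_mem_alloc *ma, size_t size)
  {
          struct obj_hdr *hdr = unit_alloc(ma, sizeof(*hdr) + size);

          if (!hdr)
                  return NULL;
          hdr->alloc = ma;
          return hdr + 1;
  }

  void bpf_obj_free_impl(void *obj)
  {
          struct obj_hdr *hdr = (struct obj_hdr *)obj - 1;

          unit_free(hdr->alloc, hdr);     /* no map pointer needed */
  }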

> 
> I see two problems: the first is concurrency. When walking each value,
> it is going to be hard to ensure the kptr field remains stable while
> you load and take ref to its allocator. Some other programs may also
> have access to the map value and may concurrently change the kptr
> field (xchg and even release it). How do we safely do a refcount_inc
> of its allocator?

Fair question. You can think of that pointer as immutable for the entire time the
allocator is able to interact with the object. Once the object makes it onto a
freelist, it won't actually be released until an RCU quiescent state has passed.
Therefore, by the first time that value can change (when we return the object to the
global kmalloc pool), there are provably no bpf-side concurrent observers left.
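
In code terms, the walk in 4.2 could look something like the sketch below. It's
hand-wavy: used_allocators_add() is hypothetical and obj_hdr is the layout sketched
above. The point is only that reading the header under rcu_read_lock() is safe, since
the object can't be handed back to the kernel before a grace period.

  static void collect_allocator(struct bpf_map *map, void **kptr_slot)
  {
          void *ptr;

          rcu_read_lock();
          ptr = READ_ONCE(*kptr_slot);            /* may be xchg'd under us */
          if (ptr) {
                  struct obj_hdr *hdr = (struct obj_hdr *)ptr - 1;

                  /* refcount_inc + track if not already tracked; worst case
                   * we're conservative and keep the allocator alive longer
                   */
                  used_allocators_add(map, hdr->alloc);
          }
          rcu_read_unlock();
  }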

Alexei, please correct me if I misunderstood how the design is supposed to work.

> 
> For the second problem, consider this:
> obj = bpf_obj_alloc(&alloc_map, ...);
> inner_map = bpf_map_lookup_elem(&map_in_map, ...);
> map_val = bpf_map_lookup_elem(inner_map, ...);
> kptr_xchg(&map_val->kptr, obj);
> 
> Now delete the entry having that inner_map, but keep its fd open.
> Unload the program, since it is map-in-map, no way to fill used_allocators.
> alloc_map is freed, releases reference on allocator, allocator is freed.
> Now close(inner_map_fd), inner_map is free. Either bad unit_free or memory leak.
> Is there a way to prevent this in your scheme?

This is fair; inner maps not being tracked in used_maps throws a wrench into that plan.

> -
> 
> I had another idea, but it's not _completely_ 0 overhead. Heavy
> prototyping so I might be missing corner cases.
> It is to take reference on each allocation and deallocation. Yes,
> naive and slow if using atomics, but instead we can use percpu_ref
> instead of atomic refcount for the allocator. percpu_ref_get/put on
> each unit_alloc/unit_free.
> The problem though is that once initial reference is killed, it
> downgrades to atomic, which will kill performance. So we need to be
> smart about how that initial reference is managed.
> My idea is that the initial ref is taken and killed by the allocator
> bpf_map pinning the allocator. Once that bpf_map is gone, you cannot
> do any more allocations anyway (since you need to pass the map pointer
> to bpf_obj_alloc), so once it downgrades to atomics at that point we
> will only be releasing the references after freeing its allocated
> objects. Yes, then the free path becomes a bit costly after the
> allocator map is gone.
> 
> We might be able to remove the cost on free path as well using the
> used_allocators scheme from above (to delay percpu_ref_kill), but it
> is not clear how to safely increment the ref of the allocator from map
> value...

As explained above, the values are already RCU-protected, so we can use that to
coordinate refcounting of the allocator. That said, percpu_ref could work (I was
considering something similar within the allocator itself), but I'm not convinced
about the cost. My concern is that once the ref degrades to atomic_t, it erases the
benefit of all the work the allocator does to maintain percpu data structures.
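
(For reference, the lifecycle you're describing is roughly the one below; the
percpu_ref API is real, ma->ref and the call sites are hypothetical.)

  #include <linux/percpu-refcount.h>

  static void allocator_release(struct percpu_ref *ref)
  {
          /* hypothetical: tear the allocator down, all objects are back */
  }

  /* at allocator map creation */
  percpu_ref_init(&ma->ref, allocator_release, 0, GFP_KERNEL);

  /* per unit_alloc / unit_free, stays percpu while the initial ref lives */
  percpu_ref_get(&ma->ref);
  percpu_ref_put(&ma->ref);

  /* when the allocator bpf_map goes away: this is the atomic_t downgrade */
  percpu_ref_kill(&ma->ref);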

I wonder if the allocator should instead maintain percpu live counts (allowing
underflow for unbalanced alloc/free pairs on a given cpu) in its percpu structures.
Then we can have explicit "sum up all the counts to discover whether you should be
destroyed" calls. If we keep the used_allocators scheme, these calls can be inserted
at program unload for maps in used_maps, and at map free time for maps that escape
that mechanism: the map goes over all its used_allocators and has each of them confirm
that its live count is still > 0.

I think doing it this way, we cover the map-in-map hole without regressing any path.

Thoughts?

> 
> wdyt?
> 
> > Overall, we think this handles all the nasty corners - objects escaping into
> > kfuncs/helpers when they shouldn't, pinned maps containing pointers to allocations,
> > programs accessing multiple allocators having deterministic freelist behaviors -
> > while keeping the API and complexity sane. The used_allocators approach can certainly
> > be less conservative (or can be even precise) but for a v1 that's probably overkill.
> > 
> > Please, feel free to shoot holes in this design! We tried to capture everything but
> > I'd love confirmation that we didn't miss anything.
> > 
> > --Delyan




