Re: [PATCH v3 bpf-next 00/15] bpf: BPF specific memory allocator, UAPI in particular


On Thu, 25 Aug 2022 at 02:56, Delyan Kratunov <delyank@xxxxxx> wrote:
>
> Alexei and I spent some time today going back and forth on what the UAPI to this
> allocator should look like in a BPF program. To our mutual surprise, the problem
> space became far more complicated than we anticipated.
>
> There are three primary problems we have to solve:
> 1) Knowing which allocator an object came from, so we can safely reclaim it when
> necessary (e.g., freeing a map).
> 2) Type confusion between local and kernel types. (I.e., a program allocating kernel
> types and passing them to helpers/kfuncs that don't expect them). This is especially
> important because the existing kptr mechanism assumes kernel types everywhere.

Why is the btf_is_kernel(reg->btf) check not enough to distinguish
local vs kernel kptrs?
We already add that check wherever kfuncs/helpers verify a
PTR_TO_BTF_ID right now.

Fun fact: I added a similar check on purpose in map_kptr_match_type,
since Alexei mentioned back then he was working on a local type
allocator, so forgetting to add it later would have been a problem.
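
I.e. roughly this gate, reusable wherever a helper/kfunc argument is
verified (paraphrasing map_kptr_match_type from memory, not exact
source):

if (!btf_is_kernel(reg->btf)) {
        verbose(env, "R%d must point to kernel BTF\n", regno);
        return -EINVAL;
}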

> 3) Allocated objects' lifetimes, allocator refcounting, etc. It all gets very hairy
> when you allow allocated objects in pinned maps.
>
> This is the proposed design that we landed on:
>
> 1. Allocators get their own MAP_TYPE_ALLOCATOR, so you can specify initial capacity
> at creation time. Value_size > 0 takes the kmem_cache path. Probably with
> btf_value_type_id enforcement for the kmem_cache path.
>
> 2. The helper APIs are just bpf_obj_alloc(bpf_map *, bpf_core_type_id_local(struct
> foo)) and bpf_obj_free(void *). Note that obj_free() only takes an object pointer.
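
Just to make sure I follow the program-side API, usage would be
roughly this? (nothing below exists yet; BPF_MAP_TYPE_ALLOCATOR and
the bpf_obj_* helpers are the names proposed above)

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h> /* bpf_core_type_id_local() */

struct foo {
        long counter;
};

struct {
        __uint(type, BPF_MAP_TYPE_ALLOCATOR); /* proposed map type */
        __uint(max_entries, 1024);            /* initial capacity */
        __type(value, struct foo);            /* kmem_cache path */
} alloc SEC(".maps");

SEC("tc")
int use_alloc(struct __sk_buff *ctx)
{
        struct foo *p;

        p = bpf_obj_alloc(&alloc, bpf_core_type_id_local(struct foo));
        if (!p)
                return 0;
        p->counter++;
        bpf_obj_free(p); /* no allocator argument, per 4.1 below */
        return 0;
}
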
>
> 3. To avoid mixing BTF type domains, a new type tag (provisionally __kptr_local)
> annotates fields that can hold values with verifier type `PTR_TO_BTF_ID |
> BTF_ID_LOCAL`. obj_alloc only ever returns these local kptrs and only ever resolves
> against program-local BTF (in the verifier; at runtime it only gets an allocation
> size).

This is OK too, but I think just gating everywhere with
btf_is_kernel would suffice.
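
For map values, I assume the annotation would mirror how __kptr is
declared today, i.e. something like this (the tag string is my
guess):

#define __kptr_local __attribute__((btf_type_tag("kptr_local")))

struct map_value {
        /* struct foo as in the snippet above */
        struct foo __kptr_local *ptr; /* PTR_TO_BTF_ID | BTF_ID_LOCAL */
};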

> 3.1. If eventually we need to pass these objects to kfuncs/helpers, we can introduce
> a new bpf_obj_export helper that takes a PTR_TO_LOCAL_BTF_ID and returns the
> corresponding PTR_TO_BTF_ID, after verifying against an allowlist of some kind. This

It would be fine to allow passing such objects if they are just plain
data (e.g. what the scalar_struct check does for kfuncs).
There we had the issue that an argument can take PTR_TO_MEM,
PTR_TO_BTF_ID, etc., so it was necessary to restrict the kind of type
to the LCD.

But we don't have to do it from day 1; I'm just listing what should
be OK.
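
By plain data I mean something the scalar_struct check would accept:
scalars (and structs/arrays of scalars) all the way down, no
pointers. E.g.:

/* safe to cross the local/kernel BTF boundary: nothing in here
 * can be confused with a kernel object */
struct plain {
        u32 a;
        u64 b[4];
};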

> would be the only place these objects can leak out of bpf land. If there's no runtime
> aspect (and there likely wouldn't be), we might consider doing this transparently,
> still against an allowlist of types.
>
> 4. To ensure the allocator stays alive while objects from it are alive, we must be
> able to identify which allocator each __kptr_local pointer came from, and we must
> keep the refcount up while any such values are alive. One concern here is that doing
> the refcount manipulation in kptr_xchg would be too expensive. The proposed solution
> is to:
> 4.1. Keep a struct bpf_mem_alloc* in the header before the returned object pointer
> from bpf_mem_alloc(). This way we never lose track which bpf_mem_alloc to return the
> object to and can simplify the bpf_obj_free() call.
> 4.2. Track used_allocators in each bpf_map. When unloading a program, we would
> walk all maps that the program has access to (that have kptr_local fields), walk each
> value, and ensure that any allocators not already in the map's used_allocators are
> refcount_inc'd and added to the list. Do note that allocators are also kept alive by
> their bpf_map wrapper, but after that's gone, used_allocators is the main mechanism.
> Once the bpf_map is gone, the allocator cannot be used to allocate new objects; we
> can only return objects to it.
> 4.3. On map free, we walk and obj_free() all the __kptr_local fields, then
> refcount_dec all the used_allocators.
>
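
If I read 4.1 right, in code it would be roughly this (assuming the
series' bpf_mem_alloc()/bpf_mem_free() entry points; the exact header
layout is my guess):

struct obj_hdr {
        struct bpf_mem_alloc *ma; /* owning allocator */
};

static void *obj_alloc(struct bpf_mem_alloc *ma, size_t size)
{
        struct obj_hdr *hdr;

        hdr = bpf_mem_alloc(ma, sizeof(*hdr) + size);
        if (!hdr)
                return NULL;
        hdr->ma = ma;
        return hdr + 1; /* the program only ever sees the object */
}

static void obj_free(void *obj)
{
        struct obj_hdr *hdr = (struct obj_hdr *)obj - 1;

        bpf_mem_free(hdr->ma, hdr); /* owner recovered from the header */
}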

So, to summarize your approach: each allocation has a bpf_mem_alloc
pointer in a header before it to track its owner allocator. We know
the used_maps of each prog, so during program unload we walk all
local kptrs in the values of each used map, and each such map takes
a reference to the allocator, stashing it in its used_allocators
list, because the prog is about to relinquish its ref on the
allocator_map (which, if it were the last one, would also release
the allocator reference out from under the local kptrs held by
those maps). Once the prog is gone, the allocator is kept alive by
the other maps holding objects allocated from it; references to the
allocator are taken lazily, when required.
Did I get it right?

I see two problems: the first is concurrency. When walking each
value, it is going to be hard to ensure the kptr field remains
stable while you load it and take a ref on its allocator. Other
programs may also have access to the map value and may concurrently
change the kptr field (xchg it, or even release the object). How do
we safely do a refcount_inc of its allocator?
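
Roughly the interleaving I'm worried about (hdr_of and refcnt are
illustrative, not real fields):

/*
 * unload-time walker                 another program
 * ------------------                 ---------------
 * p = map_val->kptr;
 *                                    old = bpf_kptr_xchg(&map_val->kptr, NULL);
 *                                    bpf_obj_free(old);  // object gone
 * ma = hdr_of(p)->ma;                // reads freed memory
 * refcount_inc(&ma->refcnt);         // inc on stale pointer
 */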

For the second problem, consider this:
obj = bpf_obj_alloc(&alloc_map, ...);
inner_map = bpf_map_lookup_elem(&map_in_map, ...);
map_val = bpf_map_lookup_elem(inner_map, ...);
bpf_kptr_xchg(&map_val->kptr, obj);

Now delete the entry holding that inner_map, but keep its fd open.
Unload the program: since it is map-in-map, there is no way to fill
used_allocators. alloc_map is freed and drops its reference on the
allocator, so the allocator is freed. Now close(inner_map_fd) and
the inner_map is freed: either a bad unit_free or a memory leak.
Is there a way to prevent this in your scheme?

--

I had another idea, but it's not _completely_ zero overhead. I'm
still heavily prototyping, so I might be missing corner cases.
The idea is to take a reference on each allocation and drop it on
each deallocation. Yes, naive and slow if done with atomics, but we
can use a percpu_ref instead of an atomic refcount for the
allocator: percpu_ref_get/put on each unit_alloc/unit_free.
The problem, though, is that once the initial reference is killed,
the percpu_ref downgrades to atomic mode, which will kill
performance. So we need to be smart about how that initial reference
is managed.
My idea is that the initial ref is taken and killed by the allocator
bpf_map that pins the allocator. Once that bpf_map is gone, you
cannot do any more allocations anyway (since you need to pass the
map pointer to bpf_obj_alloc), so after the downgrade to atomics we
will only be releasing references when freeing already-allocated
objects. Yes, the free path then becomes a bit costly after the
allocator map is gone.
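
Concretely, something like this (allocator internals elided; the
hook points and field names are made up):

#include <linux/percpu-refcount.h>

struct bpf_mem_alloc {
        /* ... caches ... */
        struct percpu_ref ref; /* initial ref held by the allocator map */
};

static void ma_release(struct percpu_ref *ref)
{
        struct bpf_mem_alloc *ma = container_of(ref, struct bpf_mem_alloc, ref);

        /* no objects left anywhere, safe to tear down ma's caches */
}

static int ma_init_ref(struct bpf_mem_alloc *ma)
{
        /* starts in percpu mode, so get/put below stay cheap */
        return percpu_ref_init(&ma->ref, ma_release, 0, GFP_KERNEL);
}

/* called on every unit_alloc()/unit_free() */
static void ma_obj_get(struct bpf_mem_alloc *ma) { percpu_ref_get(&ma->ref); }
static void ma_obj_put(struct bpf_mem_alloc *ma) { percpu_ref_put(&ma->ref); }

/* when the allocator bpf_map goes away: kill the initial ref; the ref
 * switches to atomic mode, but allocations are impossible without the
 * map anyway, so only the remaining frees pay the atomic cost */
static void ma_map_release(struct bpf_mem_alloc *ma)
{
        percpu_ref_kill(&ma->ref);
}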

We might be able to remove the cost on the free path as well by
using the used_allocators scheme from above (to delay
percpu_ref_kill), but it is not clear how to safely increment the
allocator's ref from a map value...

wdyt?

> Overall, we think this handles all the nasty corners - objects escaping into
> kfuncs/helpers when they shouldn't, pinned maps containing pointers to allocations,
> programs accessing multiple allocators having deterministic freelist behaviors -
> while keeping the API and complexity sane. The used_allocators approach can certainly
> be less conservative (or can even be precise), but for a v1 that's probably overkill.
>
> Please feel free to shoot holes in this design! We tried to capture everything, but
> I'd love confirmation that we didn't miss anything.
>
> --Delyan


