Re: Re: Re: Re: [PATCH 2/2] libbpf: BPF programs dynamic loading and attaching

On Wed, Feb 12, 2025 at 2:31 PM Martin Kelly
<martin.kelly@xxxxxxxxxxxxxxx> wrote:
>
> On Mon, 2025-02-10 at 16:06 -0800, Andrii Nakryiko wrote:
> > > Tracking associated maps for a program is not necessary. As long as
> > > the last BPF program using the BPF map is unloaded, the kernel will
> > > automatically free not-anymore-referenced BPF map. Note that
> > > bpf_object itself will keep FDs for BPF maps, so you'd need to make
> > > sure to do bpf_object__close() to release those references.
> > >
> > > But if you are going to ask to re-create BPF maps next time BPF
> > > program is loaded... Well, I'll say you are asking for a bit too
> > > much,
> > > tbh. If you want to be *that* sophisticated, it shouldn't be too
> > > hard
> > > for you to get all this information from BPF program's
> > > instructions.
> > >
>
> We really are that sophisticated (see below for more details). We could
> scan program instructions, but we'd then tie our logic to BPF
> implementation details and duplicate logic already present in libbpf
> (https://elixir.bootlin.com/linux/v6.13.2/source/tools/lib/bpf/libbpf.c#L6087
> ). Obviously this *can* be done but it's not at all ideal from an
> application perspective.
>

I agree it's not ideal, but it's also not some complicated,
bound-to-be-changed logic. What you point out in the libbpf source is
a slightly different thing; the reality is much simpler. Only the
so-called ldimm64 instruction (BPF_LD | BPF_IMM | BPF_DW opcode) can
reference a map FD, so analyzing this is borderline trivial. And it is
part of the BPF ISA, so it's not going to change.

We need to double check, but I think libbpf doesn't use the FD_ARRAY
approach unless you are using a light skeleton, so if you aren't, you
don't even have to worry about the FD_ARRAY thing.

>
> > > > >
> > > bpf_object is the unit of coherence in libbpf, so I don't see us
> > > refcounting maps between bpf_objects. Kernel is doing refcounting
> > > based on FDs, so see if you can use that.
> > >
>
> I can understand that. That said, I think if there's no logic across
> objects, and bpf_object access is not thread-safe, it puts us into a
> tough situation:
> - Complex refcounting, code scanning, etc to keep consistency when
> manipulating maps used by multiple programs.
> - Parallel loading not being well-balanced, if we split programs across
> objects.
>
> We could alternatively write our own custom loader, but then we’d have
> to duplicate much of the useful logic that libbpf already implements:
> skeleton generation, map/program association, embedding programs into
> ELFs, loading logic and kernel probing, etc. We’d like some way to
> handle dynamic/parallel loading without having to replicate all the
> advantages libbpf grants us.
>

Yeah, I can understand that as well, but bpf_object's single-threaded
design, and the fact that bpf_object__load is kind of the final step
where programs are loaded (or not), is pretty baked in. I don't see
bpf_object becoming multi-threaded. Dynamic program
loading/unloading/reloading is something that I can't yet justify,
tbh.

So the best I can propose to you is to use libbpf's skeleton and the
bpf_object concept for, effectively, ELF handling, relocations, and
all the preparations up to loading BPF programs. After that you can
take over loading and managing program lifetime outside of bpf_object.
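Roughly along these lines (an untested sketch using current libbpf
APIs; it assumes the object has reached the point where relocations
have been applied, and error handling is elided):

```c
/* Sketch: let bpf_object do ELF parsing and relocation, then load a
 * program manually with bpf_prog_load() so its lifetime is managed
 * outside the bpf_object. */
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

static int load_one(struct bpf_program *prog)
{
	LIBBPF_OPTS(bpf_prog_load_opts, opts);

	/* Instructions are final after relocation, so they can be
	 * handed to the kernel directly. */
	const struct bpf_insn *insns = bpf_program__insns(prog);
	size_t insn_cnt = bpf_program__insn_cnt(prog);

	/* Returns the program FD (or a negative error); the caller
	 * owns the FD and thus the program's lifetime. */
	return bpf_prog_load(bpf_program__type(prog),
			     bpf_program__name(prog),
			     "GPL", insns, insn_cnt, &opts);
}
```

Closing the returned FD (once no other references exist) unloads the
program, independent of the originating bpf_object.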

Dynamic map creation after bpf_object__load() is, I think, completely
out of scope, and you'll have to solve that problem yourself. I would
point out, though, that internally libbpf has already switched to
sort-of pre-creating stable FDs for maps before they are actually
created in the kernel. So it's conceivable that we could have more
granularity in bpf_object preparation. I.e., the first step would be
to parse the ELF and handle relocations, preparing everything. After
that we could have a step to create maps, and then another to create
programs. Usually people would do all of that, but you could stop
right before map creation or before program creation, whatever fits
your use case better.

The key is that program instructions will be final and won't need
adjustments regardless of whether maps are actually created or not.
FDs, as I mentioned, are stable regardless.

So, not ideal for your (very complicated) use case, but you'd still
avoid dealing with all the ELF and relocation stuff (which is the
annoying and rather complicated part, and one I can see no one wanting
to reimplement). Map and program creation are relatively
straightforward matters compared to that.

> > >
> > >
> > > Is 100 just a nicely looking rather large number, or do you really
> > > have 100 different BPF programs? Why so many and are they really
> > > all
> > > unique?
> > >
> > > Asking because if it's just a way to attach BPF program doing more
> > > or
> > > less uniform set of actions for different hooks, then perhaps there
> > > are better ways to do this without having to duplicating BPF
> > > programs
> > > so much (like BPF cookie, multi-kprobes, etc, etc)
>
> 100 is not an arbitrary number; we have that and higher (~200 is a good
> current estimate, and that grows as new product features are added).
> The programs are really doing different things. We also have to support
> a wide range of kernels, handling cases like: "on this kernel range,
> trampolines aren't supported, so use kretprobes with a context map for
> function args instead of fexit, but on newer kernels just use an fexit
> hook."

Yes, this is typical, and bpf_program__set_autoload() and
bpf_map__set_autocreate() are meant to handle that. It's program
loading after bpf_object load that is not supported.
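For the fexit-vs-kretprobe case specifically, that pattern could look
something like this untested sketch (the program names
"do_work_fexit" and "do_work_kretprobe" are made up for illustration;
NULL checks and error handling are elided):

```c
/* Sketch: before bpf_object__load(), enable only the program variant
 * the running kernel can support, using feature probing. */
#include <bpf/libbpf.h>

static void pick_variant(struct bpf_object *obj)
{
	/* Trampoline-based fexit programs are BPF_PROG_TYPE_TRACING;
	 * probe whether this kernel accepts that program type. */
	bool have_tracing =
		libbpf_probe_bpf_prog_type(BPF_PROG_TYPE_TRACING, NULL) > 0;

	struct bpf_program *fexit =
		bpf_object__find_program_by_name(obj, "do_work_fexit");
	struct bpf_program *kret =
		bpf_object__find_program_by_name(obj, "do_work_kretprobe");

	/* Exactly one variant gets loaded; the other is skipped. */
	bpf_program__set_autoload(fexit, have_tracing);
	bpf_program__set_autoload(kret, !have_tracing);
}
```

bpf_map__set_autocreate() works the same way for maps that only the
skipped variant would have used.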

>
> The use case here is that our security monitoring agent leverages eBPF
> as its foundational technology to gather telemetry from the kernel. As
> part of that, we hook many different kernel subsystems (process,
> memory, filesystem, network, etc), tying them together and tracking
> with maps. So we legitimately have a very large number of programs all
> doing different work. For products of this scale, it increases security
> and performance to load this set of programs and their maps in an
> optimized, parallel fashion and subsequently change the loaded set of
> programs and maps dynamically without disturbing the rest of the
> application.

Yes, makes sense. You'll need to decide for yourself whether it's more
meaningful to split those 200 programs into independent bpf_objects by
feature, being rigorous about sharing state (maps) through
bpf_map__reuse_fd(), which would let you parallelize loading within
the confines of existing libbpf APIs. Or you can go a bit more
low-level and handle program loading outside of the bpf_object API, as
I described above.
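The map-sharing half of that could be sketched as follows (untested;
the "events" map name is illustrative and error propagation is
abbreviated):

```c
/* Sketch: two bpf_objects share one map. The second object reuses the
 * FD of the map the first object created, so both sets of programs
 * operate on the same kernel map. */
#include <bpf/libbpf.h>

static int load_pair(struct bpf_object *a, struct bpf_object *b)
{
	int err = bpf_object__load(a);
	if (err)
		return err;

	/* Must happen before loading b: point b's "events" map at the
	 * map a already created. The kernel refcounts the shared map
	 * through both objects' FDs, so it stays alive until the last
	 * reference is gone. */
	struct bpf_map *src = bpf_object__find_map_by_name(a, "events");
	struct bpf_map *dst = bpf_object__find_map_by_name(b, "events");

	err = bpf_map__reuse_fd(dst, bpf_map__fd(src));
	if (err)
		return err;

	return bpf_object__load(b);
}
```

Once both objects are loaded this way, each can be closed or reloaded
independently without invalidating the shared map.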
