On 09/08, Marcelo Juchem wrote:
I'm not sure I can run definitive tests to know whether the program
is loadable or not. I wouldn't even know what the tests should be in
all cases.
On the other hand, it's very practical for me to attempt attaching
certain functions in order, until one of them works.
Besides, that's not necessarily the only library functionality that
could be built on top of this extra introspective power.
This is just one usage example, of course, but I'll try to illustrate
what one possible application would look like:
```
size_t attach_with_fallback(bpf_object_skeleton *skeleton,
			    std::initializer_list<size_t> probes) {
	for (size_t i : probes) {
		bpf_link *link = bpf_program__attach(*skeleton->progs[i].prog);
		if (!libbpf_get_error(link)) {
			return i;
		}
	}
	return skeleton->prog_cnt;
}
// ...
CHECK_ATTACHED(
	attach_with_fallback(
		obj->skeleton,
		{
			my_ebpf::prog_index::on_prepare_task_switch,
			my_ebpf::prog_index::on_prepare_task_switch_isra_0,
			my_ebpf::prog_index::on_finish_task_switch,
			my_ebpf::prog_index::on_finish_task_switch_isra_0
		}
	)
);
CHECK_ATTACHED(
	attach_with_fallback(
		obj->skeleton,
		{
			my_ebpf::prog_index::on_tcp_recvmsg,
			my_ebpf::prog_index::on_tcp_recvmsg_pre_5_19
		}
	)
);
```
Thanks for the explanation, but I think I'm still confused :-)
You mention 'attach' and that the kernel will not let you attach, but
isn't 'attach' too late? The kernel should not let you load the program
if the function signature doesn't match, so you'd need a
load_with_fallback?
But regardless, you should be able to achieve it without any new code,
it seems:
bool attach_with_fallback(std::initializer_list<struct bpf_program *> progs) {
	for (auto *p : progs) {
		bpf_link *link = bpf_program__attach(p);
		if (!libbpf_get_error(link)) {
			return true;
		}
	}
	return false;
}
CHECK_ATTACHED(
	attach_with_fallback({
		bpf_object__find_program_by_name(obj->skeleton->obj, "on_tcp_recvmsg"),
		bpf_object__find_program_by_name(obj->skeleton->obj, "on_tcp_recvmsg_5_19"),
	})
);
Would something like this work for you?
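
On the 'load' side of the question, one possible shape for a
load_with_fallback (just a sketch, not anything from the patch:
select_one_of(), kernel_is_at_least_5_19() and the program names are
made-up placeholders) is to keep every candidate in the object but flip
bpf_program__set_autoload() between xxx__open() and xxx__load(), so that
only the variant expected to pass the verifier on the running kernel is
actually loaded:

#include <errno.h>
#include <string.h>
#include <initializer_list>
#include <bpf/libbpf.h>

/* Enable autoload only for the chosen candidate; leave the rest out of the
 * load. Call this after xxx__open() and before xxx__load(). */
static int select_one_of(struct bpf_object *obj,
			 std::initializer_list<const char *> candidates,
			 const char *chosen)
{
	for (const char *name : candidates) {
		struct bpf_program *p = bpf_object__find_program_by_name(obj, name);

		if (!p)
			return -ENOENT;
		bpf_program__set_autoload(p, strcmp(name, chosen) == 0);
	}
	return 0;
}

/* Usage (kernel_is_at_least_5_19() stands in for whatever runtime check the
 * application uses):
 *
 *   select_one_of(skel->obj,
 *                 { "on_tcp_recvmsg", "on_tcp_recvmsg_pre_5_19" },
 *                 kernel_is_at_least_5_19() ? "on_tcp_recvmsg"
 *                                           : "on_tcp_recvmsg_pre_5_19");
 *   my_ebpf__load(skel);
 */

The same by-name lookup works here as well, so it doesn't depend on the
generated index enums either.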
-mj
On Thu, Sep 8, 2022 at 5:03 PM <sdf@xxxxxxxxxx> wrote:
>
> On 09/08, Marcelo Juchem wrote:
> > The skeleton generated by `bpftool` makes it easy to attach and load
> > bpf objects as a whole. Some BPF programs are not directly portable
> > across kernel versions, though, and require some cherry-picking on
> > which programs to load/attach. The skeleton makes this cherry-picking
> > possible, but not entirely friendly in some cases.
>
> > For example, a useful feature is `attach_with_fallback`, so that one
> > program can be attempted and fallback programs tried subsequently
> > until one works (think the `tcp_recvmsg` interface changing on kernel
> > 5.19).
>
> > Being able to represent a set of probes programmatically in a way
> > that is descriptive, compile-time validated, runtime efficient and
> > custom-library friendly is quite desirable for application
> > developers. A very simple way to represent a set of probes is with an
> > array of indices.
>
> > This patch creates a couple of enums under the `__cplusplus` section
> > to represent the program and map indices inside the skeleton object,
> > which can be used to refer to the proper program/map object.
>
> > This is the code generated for the `__cplusplus` section of
> > `profiler.skel.h`:
> > ```
> > enum map_idxs: size_t {
> > 	events = 0,
> > 	fentry_readings = 1,
> > 	accum_readings = 2,
> > 	counts = 3,
> > 	rodata = 4
> > };
> > enum prog_idxs: size_t {
> > 	fentry_XXX = 0,
> > 	fexit_XXX = 1
> > };
> > static inline struct profiler_bpf *open(const struct bpf_object_open_opts *opts = nullptr);
> > static inline struct profiler_bpf *open_and_load();
> > static inline int load(struct profiler_bpf *skel);
> > static inline int attach(struct profiler_bpf *skel);
> > static inline void detach(struct profiler_bpf *skel);
> > static inline void destroy(struct profiler_bpf *skel);
> > static inline const void *elf_bytes(size_t *sz);
> > ```
> > ---
> > src/gen.c | 32 ++++++++++++++++++++++++++++++++
> > 1 file changed, 32 insertions(+)
>
> > diff --git a/src/gen.c b/src/gen.c
> > index 7070dcf..7e28dc7 100644
> > --- a/src/gen.c
> > +++ b/src/gen.c
> > @@ -1086,6 +1086,38 @@ static int do_skeleton(int argc, char **argv)
> >  		\n\
> >  									    \n\
> >  		#ifdef __cplusplus					    \n\
> > +		"
> > +	);
> > +
>
> [..]
>
> > +	{
> > +		size_t i = 0;
> > +		printf("\tenum map_index: size_t {");
> > +		bpf_object__for_each_map(map, obj) {
> > +			if (!get_map_ident(map, ident, sizeof(ident)))
> > +				continue;
> > +			if (i) {
> > +				printf(",");
> > +			}
> > +			printf("\n\t\t%s = %lu", ident, i);
> > +			++i;
> > +		}
> > +		printf("\n\t};\n");
> > +	}
> > +	{
> > +		size_t i = 0;
> > +		printf("\tenum prog_index: size_t {");
> > +		bpf_object__for_each_program(prog, obj) {
> > +			if (i) {
> > +				printf(",");
> > +			}
> > +			printf("\n\t\t%s = %lu", bpf_program__name(prog), i);
> > +			++i;
> > +		}
> > +		printf("\n\t};\n");
> > +	}
>
> I might be missing something, but what prevents you from calling these
> on the skeleton's bpf_object?
>
> skel = xxx__open();
>
> bpf_object__for_each_map(map, skel->obj) {
> // do whatever you want here to test whether it's loadable or not
> }
>
> // same for bpf_object__for_each_program
>
> xxx__load(skel);
>
> How do these new enums help?
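
For what it's worth, a concrete version of the "test whether it's
loadable or not" step could look like the sketch below. It assumes
fentry/fexit programs whose attach target can be checked against the
kernel BTF; the helper name is made up:

#include <string.h>
#include <bpf/libbpf.h>
#include <bpf/btf.h>

/* Walk the open (not yet loaded) object and turn off autoload for any
 * fentry/fexit program whose target function does not exist in the running
 * kernel's BTF, so a single xxx__load() succeeds with the remaining ones. */
static void disable_missing_fentry_targets(struct bpf_object *obj)
{
	struct btf *vmlinux_btf = btf__load_vmlinux_btf();
	struct bpf_program *prog;

	if (!vmlinux_btf)
		return;

	bpf_object__for_each_program(prog, obj) {
		const char *sec = bpf_program__section_name(prog);
		const char *target = strchr(sec, '/');

		if (strncmp(sec, "fentry/", 7) != 0 && strncmp(sec, "fexit/", 6) != 0)
			continue;
		if (!target)
			continue;
		if (btf__find_by_name_kind(vmlinux_btf, target + 1, BTF_KIND_FUNC) < 0)
			bpf_program__set_autoload(prog, false);
	}
	btf__free(vmlinux_btf);
}

After a pass like this, a single xxx__load()/xxx__attach() only deals
with the programs that are expected to work on the running kernel.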