> On Feb 12, 2020, at 9:34 AM, Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Wed, Feb 12, 2020 at 4:32 AM Eelco Chaudron <echaudro@xxxxxxxxxx> wrote:
>>
>> Currently, when you want to attach a trace program to a bpf program,
>> the section name needs to match the tracepoint/function semantics.
>>
>> However, the addition of the bpf_program__set_attach_target() API
>> allows you to specify the tracepoint/function dynamically.
>>
>> The call flow would look something like this:
>>
>>   xdp_fd = bpf_prog_get_fd_by_id(id);
>>   trace_obj = bpf_object__open_file("func.o", NULL);
>>   prog = bpf_object__find_program_by_title(trace_obj,
>>                                            "fentry/myfunc");
>>   bpf_program__set_attach_target(prog, xdp_fd,
>>                                  "fentry/xdpfilt_blk_all");
>>   bpf_object__load(trace_obj);
>>
>> Signed-off-by: Eelco Chaudron <echaudro@xxxxxxxxxx>

I am trying to solve the same problem with a slightly different approach.
It works as follows (with a skeleton):

	obj = myobject_bpf__open_opts(&opts);

	bpf_object__for_each_program(prog, obj->obj)
		bpf_program__overwrite_section_name(prog, new_names[id++]);

	err = myobject_bpf__load(obj);

I don't have a very strong preference, but I think my approach is simpler?

Thanks,
Song

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 514b1a524abb..4c29a7181d57 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -238,6 +238,8 @@ struct bpf_program {
 	__u32 line_info_rec_size;
 	__u32 line_info_cnt;
 	__u32 prog_flags;
+
+	char *overwritten_section_name;
 };

 struct bpf_struct_ops {
@@ -442,6 +444,7 @@ static void bpf_program__exit(struct bpf_program *prog)
 	zfree(&prog->pin_name);
 	zfree(&prog->insns);
 	zfree(&prog->reloc_desc);
+	zfree(&prog->overwritten_section_name);

 	prog->nr_reloc = 0;
 	prog->insns_cnt = 0;
@@ -6637,7 +6640,7 @@ static int libbpf_find_attach_btf_id(struct bpf_program *prog)
 {
 	enum bpf_attach_type attach_type = prog->expected_attach_type;
 	__u32 attach_prog_fd = prog->attach_prog_fd;
-	const char *name = prog->section_name;
+	const char *name = prog->overwritten_section_name ? : prog->section_name;
 	int i, err;

 	if (!name)
@@ -8396,3 +8399,11 @@ void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
 	free(s->progs);
 	free(s);
 }
+
+char *bpf_program__overwrite_section_name(struct bpf_program *prog,
+					  const char *sec_name)
+{
+	prog->overwritten_section_name = strdup(sec_name);
+
+	return prog->overwritten_section_name;
+}
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 3fe12c9d1f92..02f0d8b57cc4 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -595,6 +595,10 @@ bpf_program__bpil_addr_to_offs(struct bpf_prog_info_linear *info_linear);
 LIBBPF_API void
 bpf_program__bpil_offs_to_addr(struct bpf_prog_info_linear *info_linear);

+LIBBPF_API char *
+bpf_program__overwrite_section_name(struct bpf_program *prog,
+				    const char *sec_name);
+
 /*
  * A helper function to get the number of possible CPUs before looking up
  * per-CPU maps. Negative errno is returned on failure.
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index b035122142bb..ed26c20729db 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -235,3 +235,8 @@ LIBBPF_0.0.7 {
 		btf__align_of;
 		libbpf_find_kernel_btf;
 } LIBBPF_0.0.6;
+
+LIBBPF_0.0.8 {
+	global:
+		bpf_program__overwrite_section_name;
+} LIBBPF_0.0.7;
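
For reference, a rough, untested sketch of how the API proposed above could
be used without a skeleton. It reuses the placeholder names from Eelco's
example ("func.o", "fentry/myfunc", "xdpfilt_blk_all"), and it assumes the
target program is supplied through the attach_prog_fd field of
bpf_object_open_opts; attachment itself is left out:

/* Rough, untested sketch: retarget an fentry program at load time using
 * the proposed bpf_program__overwrite_section_name().  "func.o",
 * "fentry/myfunc" and "xdpfilt_blk_all" are placeholders from the
 * example above; error handling is kept minimal.
 */
#include <errno.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static int load_retargeted_fentry(__u32 target_prog_id)
{
	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
	struct bpf_object *obj;
	struct bpf_program *prog;
	int target_fd, err;

	/* FD of the already loaded program we want to trace into. */
	target_fd = bpf_prog_get_fd_by_id(target_prog_id);
	if (target_fd < 0)
		return target_fd;
	opts.attach_prog_fd = target_fd;

	obj = bpf_object__open_file("func.o", &opts);
	if (libbpf_get_error(obj))
		return -EINVAL;

	prog = bpf_object__find_program_by_title(obj, "fentry/myfunc");
	if (!prog) {
		err = -ENOENT;
		goto out_close;
	}

	/* Make libbpf resolve the BTF ID of the function we actually
	 * want to trace instead of the one named in the ELF section.
	 */
	if (!bpf_program__overwrite_section_name(prog, "fentry/xdpfilt_blk_all")) {
		err = -ENOMEM;
		goto out_close;
	}

	err = bpf_object__load(obj);
	if (err)
		goto out_close;

	/* Attachment (e.g. bpf_program__attach_trace()) omitted. */
	return 0;

out_close:
	bpf_object__close(obj);
	return err;
}

With bpf_program__set_attach_target() the target fd and function would
instead be set per program after open, so the two approaches differ mainly
in where the new target name is supplied.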