On Mon, Nov 28, 2022 at 5:29 AM Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
>
> Adding bpf_vma_build_id_parse function to retrieve build id from
> passed vma object and making it available as bpf kfunc.
>
> We can't use build_id_parse directly as kfunc, because we would
> not have control over the build id buffer size provided by user.
>
> Instead we are adding new bpf_vma_build_id_parse function with
> 'build_id__sz' argument that instructs verifier to check for the
> available space in build_id buffer.
>
> This way we check that there's always available memory space
> behind build_id pointer. We also check that the build_id__sz is
> at least BUILD_ID_SIZE_MAX so we can place any buildid in.
>
> The bpf_vma_build_id_parse kfunc is marked as KF_TRUSTED_ARGS,
> so it can be only called with trusted vma objects. These are
> currently provided only by find_vma callback function and
> task_vma iterator program.
>
> Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
> ---
>  include/linux/bpf.h      |  4 ++++
>  kernel/trace/bpf_trace.c | 31 +++++++++++++++++++++++++++++++
>  2 files changed, 35 insertions(+)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index c6aa6912ea16..359c8fe11779 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2839,4 +2839,8 @@ static inline bool type_is_alloc(u32 type)
>  	return type & MEM_ALLOC;
>  }
>
> +int bpf_vma_build_id_parse(struct vm_area_struct *vma,
> +			   unsigned char *build_id,
> +			   size_t build_id__sz);
> +
>  #endif /* _LINUX_BPF_H */
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 3bbd3f0c810c..7340de74531a 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -23,6 +23,7 @@
>  #include <linux/sort.h>
>  #include <linux/key.h>
>  #include <linux/verification.h>
> +#include <linux/buildid.h>
>
>  #include <net/bpf_sk_storage.h>
>
> @@ -1383,6 +1384,36 @@ static int __init bpf_key_sig_kfuncs_init(void)
>  late_initcall(bpf_key_sig_kfuncs_init);
>  #endif /* CONFIG_KEYS */
>
> +int bpf_vma_build_id_parse(struct vm_area_struct *vma,
> +			   unsigned char *build_id,
> +			   size_t build_id__sz)
> +{
> +	__u32 size;
> +	int err;
> +
> +	if (build_id__sz < BUILD_ID_SIZE_MAX)
> +		return -EINVAL;
> +
> +	err = build_id_parse(vma, build_id, &size);
> +	return err ?: (int) size;

If err is positive the caller won't be able to distinguish it from
size. A possible fix is sketched at the end of this mail.

> +}
> +
> +BTF_SET8_START(tracing_btf_ids)
> +BTF_ID_FLAGS(func, bpf_vma_build_id_parse, KF_TRUSTED_ARGS)
> +BTF_SET8_END(tracing_btf_ids)
> +
> +static const struct btf_kfunc_id_set tracing_kfunc_set = {
> +	.owner = THIS_MODULE,
> +	.set = &tracing_btf_ids,
> +};
> +
> +static int __init kfunc_tracing_init(void)
> +{
> +	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &tracing_kfunc_set);
> +}
> +
> +late_initcall(kfunc_tracing_init);

Its own btf_id set and its own late_initcall just for one kfunc?
Please reduce this boilerplate code. Move it to kernel/bpf/helpers.c?
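
For the err vs size issue above, something like below would keep the
two apart (untested; afaics build_id_parse() only returns 0 or a
negative errno today, so folding a positive err into -EINVAL is purely
defensive and just one possible policy):

	int bpf_vma_build_id_parse(struct vm_area_struct *vma,
				   unsigned char *build_id,
				   size_t build_id__sz)
	{
		__u32 size;
		int err;

		if (build_id__sz < BUILD_ID_SIZE_MAX)
			return -EINVAL;

		err = build_id_parse(vma, build_id, &size);
		/* Keep the error space strictly negative so the caller
		 * can always tell an error from a returned size.
		 */
		if (err)
			return err < 0 ? err : -EINVAL;
		return size;
	}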
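
And for the registration, if exposing the kfunc beyond tracing progs
is acceptable, the whole thing collapses to one line in the existing
set in kernel/bpf/helpers.c (untested; the set name assumes the
current helpers.c layout, where generic_kfunc_set is already
registered for BPF_PROG_TYPE_TRACING from kfunc_init()):

	BTF_SET8_START(generic_btf_ids)
	/* ... existing entries ... */
	BTF_ID_FLAGS(func, bpf_vma_build_id_parse, KF_TRUSTED_ARGS)
	BTF_SET8_END(generic_btf_ids)

No new btf_kfunc_id_set and no new late_initcall needed. If it has to
stay tracing-only, a separate set in helpers.c registered from the
same kfunc_init() would still get rid of the extra initcall.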