> On Nov 3, 2022, at 12:45 PM, Yonghong Song <yhs@xxxxxxxx> wrote:
>
> On 11/1/22 3:02 AM, Jiri Olsa wrote:
>> On Mon, Oct 31, 2022 at 10:23:39PM -0700, Namhyung Kim wrote:
>>> The bpf_perf_event_read_sample() helper is to get the specified sample
>>> data (by using PERF_SAMPLE_* flag in the argument) from BPF to make a
>>> decision for filtering on samples. Currently PERF_SAMPLE_IP and
>>> PERF_SAMPLE_DATA flags are supported only.
>>>
>>> Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>
>>> ---
>>>  include/uapi/linux/bpf.h       | 23 ++++++++++++++++
>>>  kernel/trace/bpf_trace.c       | 49 ++++++++++++++++++++++++++++++++++
>>>  tools/include/uapi/linux/bpf.h | 23 ++++++++++++++++
>>>  3 files changed, 95 insertions(+)
>>>
>>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>>> index 94659f6b3395..cba501de9373 100644
>>> --- a/include/uapi/linux/bpf.h
>>> +++ b/include/uapi/linux/bpf.h
>>> @@ -5481,6 +5481,28 @@ union bpf_attr {
>>>  *		0 on success.
>>>  *
>>>  *		**-ENOENT** if the bpf_local_storage cannot be found.
>>> + *
>>> + * long bpf_perf_event_read_sample(struct bpf_perf_event_data *ctx, void *buf, u32 size, u64 sample_flags)
>>> + *	Description
>>> + *		For an eBPF program attached to a perf event, retrieve the
>>> + *		sample data associated to *ctx* and store it in the buffer
>>> + *		pointed by *buf* up to size *size* bytes.
>>> + *
>>> + *		The *sample_flags* should contain a single value in the
>>> + *		**enum perf_event_sample_format**.
>>> + *	Return
>>> + *		On success, number of bytes written to *buf*. On error, a
>>> + *		negative value.
>>> + *
>>> + *		The *buf* can be set to **NULL** to return the number of bytes
>>> + *		required to store the requested sample data.
>>> + *
>>> + *		**-EINVAL** if *sample_flags* is not a PERF_SAMPLE_* flag.
>>> + *
>>> + *		**-ENOENT** if the associated perf event doesn't have the data.
>>> + *
>>> + *		**-ENOSYS** if system doesn't support the sample data to be
>>> + *		retrieved.
>>>   */
>>>  #define ___BPF_FUNC_MAPPER(FN, ctx...)			\
>>>  	FN(unspec, 0, ##ctx)				\
>>> @@ -5695,6 +5717,7 @@ union bpf_attr {
>>>  	FN(user_ringbuf_drain, 209, ##ctx)		\
>>>  	FN(cgrp_storage_get, 210, ##ctx)		\
>>>  	FN(cgrp_storage_delete, 211, ##ctx)		\
>>> +	FN(perf_event_read_sample, 212, ##ctx)		\
>>>  	/* */
>>>
>>>  /* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
>>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>>> index ce0228c72a93..befd937afa3c 100644
>>> --- a/kernel/trace/bpf_trace.c
>>> +++ b/kernel/trace/bpf_trace.c
>>> @@ -28,6 +28,7 @@
>>>
>>>  #include <uapi/linux/bpf.h>
>>>  #include <uapi/linux/btf.h>
>>> +#include <uapi/linux/perf_event.h>
>>>
>>>  #include <asm/tlb.h>
>>>
>>> @@ -1743,6 +1744,52 @@ static const struct bpf_func_proto bpf_read_branch_records_proto = {
>>>  	.arg4_type      = ARG_ANYTHING,
>>>  };
>>>
>>> +BPF_CALL_4(bpf_perf_event_read_sample, struct bpf_perf_event_data_kern *, ctx,
>>> +	   void *, buf, u32, size, u64, flags)
>>> +{
>>
>> I wonder we could add perf_btf (like we have tp_btf) program type that
>> could access ctx->data directly without helpers
>
> Martin and I have discussed an idea to introduce a generic helper like
>   bpf_get_kern_ctx(void *ctx)
> Given a context, the helper will return a PTR_TO_BTF_ID representing the
> corresponding kernel ctx. So in the above example, user could call
>
>   struct bpf_perf_event_data_kern *kctx = bpf_get_kern_ctx(ctx);
>   ...

This is an interesting idea!
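
With that, a perf_event program could read the sample fields directly
instead of copying them out through a helper. Roughly something like the
sketch below (untested; bpf_get_kern_ctx is only the proposed name from
above, nothing in the tree yet, and the usual vmlinux.h/bpf_helpers.h
boilerplate is omitted):

SEC("perf_event")
int filter_sample_direct(struct bpf_perf_event_data *ctx)
{
	/* hypothetical helper: the verifier would turn this into a
	 * PTR_TO_BTF_ID pointing at the in-kernel context
	 */
	struct bpf_perf_event_data_kern *kctx = bpf_get_kern_ctx(ctx);

	/* read the sample directly, no copy into a local buffer */
	if (kctx->data->ip == 0)
		return 0;	/* returning 0 drops the sample */

	return 1;		/* non-zero keeps the sample */
}
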
> To implement bpf_get_kern_ctx helper, the verifier can find the type
> of the context and provide a hidden btf_id as the second parameter of
> the actual kernel helper function like
>   bpf_get_kern_ctx(ctx) {
>      return ctx;
>   }
>   /* based on ctx_btf_id, find kctx_btf_id and return it to verifier */

I think we will need a map of ctx_btf_id => kctx_btf_id. Shall we
somehow expose this to the user?

Thanks,
Song

> The bpf_get_kern_ctx helper can be inlined as well.
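
For reference, with the helper as proposed in this patch the same kind of
filter would copy the data out first. A rough sketch of the intended usage
(untested; PERF_SAMPLE_IP is one of the two formats the patch supports, and
the helper declaration would come from the updated UAPI/helper headers):

SEC("perf_event")
int filter_sample_copy(struct bpf_perf_event_data *ctx)
{
	__u64 ip;
	long ret;

	/* copy the sampled instruction pointer into a local buffer */
	ret = bpf_perf_event_read_sample(ctx, &ip, sizeof(ip), PERF_SAMPLE_IP);
	if (ret < 0)
		return 0;	/* sample data not available, drop it */

	return ip != 0;		/* keep samples with a non-zero IP */
}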