On Wed, Sep 23, 2020 at 9:54 AM Song Liu <songliubraving@xxxxxx> wrote:
>
> Add .test_run for raw_tracepoint. Also, introduce a new feature that runs
> the target program on a specific CPU. This is achieved by a new field in
> bpf_attr.test, cpu_plus. For compatibility, cpu_plus == 0 means run the
> program on the current cpu, while cpu_plus > 0 means run the program on
> the cpu with id (cpu_plus - 1). This feature is needed for BPF programs
> that handle perf_event and other percpu resources, as the program can
> access these resources locally.
>
> Acked-by: John Fastabend <john.fastabend@xxxxxxxxx>
> Signed-off-by: Song Liu <songliubraving@xxxxxx>
> ---
>  include/linux/bpf.h            |  3 ++
>  include/uapi/linux/bpf.h       |  5 ++
>  kernel/bpf/syscall.c           |  2 +-
>  kernel/trace/bpf_trace.c       |  1 +
>  net/bpf/test_run.c             | 88 ++++++++++++++++++++++++++++++++++
>  tools/include/uapi/linux/bpf.h |  5 ++
>  6 files changed, 103 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index d7c5a6ed87e30..23758c282eb4b 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1376,6 +1376,9 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
>  int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
>                                      const union bpf_attr *kattr,
>                                      union bpf_attr __user *uattr);
> +int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
> +                            const union bpf_attr *kattr,
> +                            union bpf_attr __user *uattr);
>  bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>                     const struct bpf_prog *prog,
>                     struct bpf_insn_access_aux *info);
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index a22812561064a..89acf41913e70 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -566,6 +566,11 @@ union bpf_attr {
>                  */
>                 __aligned_u64   ctx_in;
>                 __aligned_u64   ctx_out;
> +               __u32           cpu_plus;       /* run this program on cpu
> +                                                * (cpu_plus - 1).
> +                                                * If cpu_plus == 0, run on
> +                                                * current cpu.
> +                                                */

The "_plus" part of the name is confusing, and so are the off-by-one
semantics. Why not do what we do with BPF_PROG_ATTACH? I.e., have a
flags field, and if a specific bit is set, use the extra field's
value. In this case, you'd have:

__u32 flags;
__u32 cpu; /* naturally 0-based */

CPU indexing stays natural, without any offsets, and you'd have
something like a BPF_PROG_TEST_CPU flag that needs to be specified.
This also works well for backward/forward compatibility. (There is a
rough sketch of what I mean at the bottom of this mail.)

If you need a special "current CPU" mode, you can get that by not
specifying the BPF_PROG_TEST_CPU flag, or we can designate (__u32)-1
as a special "current CPU" value. WDYT?

>         } test;
>
>         struct { /* anonymous struct used by BPF_*_GET_*_ID */
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index ec68d3a23a2b7..4664531ff92ea 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -2975,7 +2975,7 @@ static int bpf_prog_query(const union bpf_attr *attr,
>         }
>  }
>
> -#define BPF_PROG_TEST_RUN_LAST_FIELD test.ctx_out
> +#define BPF_PROG_TEST_RUN_LAST_FIELD test.cpu_plus
>
[...]
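
To make the suggestion above concrete, here is a rough, untested
sketch of the uapi change and the kernel-side checks I have in mind.
BPF_PROG_TEST_CPU (and the bit it uses) is just a placeholder name for
illustration, not anything that exists today:

/* include/uapi/linux/bpf.h */

/* if set, run the test program on the CPU given in test.cpu */
#define BPF_PROG_TEST_CPU       (1U << 0)       /* placeholder bit */

                /* inside union bpf_attr's test struct, after ctx_out */
                __aligned_u64   ctx_in;
                __aligned_u64   ctx_out;
                __u32           flags;          /* BPF_PROG_TEST_* flags */
                __u32           cpu;            /* naturally 0-based */

/* kernel side, e.g. at the start of the raw_tp test_run handler */

        u32 flags = kattr->test.flags;
        u32 cpu = kattr->test.cpu;

        /* reject unknown flag bits for forward compatibility */
        if (flags & ~BPF_PROG_TEST_CPU)
                return -EINVAL;
        /* reject a non-zero cpu without the flag, so the field stays
         * extensible
         */
        if (!(flags & BPF_PROG_TEST_CPU) && cpu != 0)
                return -EINVAL;

        if (flags & BPF_PROG_TEST_CPU) {
                if (cpu >= nr_cpu_ids || !cpu_online(cpu))
                        return -ENXIO;
                /* run on that CPU, e.g. via smp_call_function_single() */
        } else {
                /* default: run on the current CPU, as today */
        }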
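
And from user space, usage would look something like the below. Note
that the flags/cpu fields and BPF_PROG_TEST_CPU don't exist in current
headers, so this won't compile as-is; it's only meant to show the
intended semantics:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* hypothetical flag, mirrors the sketch above */
#define BPF_PROG_TEST_CPU       (1U << 0)

static int test_run_on_cpu(int prog_fd, __u32 cpu)
{
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.test.prog_fd = prog_fd;
        attr.test.flags = BPF_PROG_TEST_CPU;    /* hypothetical */
        attr.test.cpu = cpu;                    /* 0-based, no off-by-one */

        return syscall(__NR_bpf, BPF_PROG_TEST_RUN, &attr, sizeof(attr));
}

Leaving BPF_PROG_TEST_CPU out (flags == 0, cpu == 0) keeps today's
behavior of running on the current CPU, so old binaries with
zero-initialized bpf_attr continue to work unchanged.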