On Thu, 3 Oct 2024 11:16:34 -0400
Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx> wrote:

> In preparation for allowing system call enter/exit instrumentation to
> handle page faults, make sure that bpf can handle this change by
> explicitly disabling preemption within the bpf system call tracepoint
> probes to respect the current expectations within bpf tracing code.
>
> This change does not yet allow bpf to take page faults per se within its
> probe, but allows its existing probes to adapt to the upcoming change.

I guess the BPF folks should state if this is needed or not? Do the BPF
hooks into the tracepoints expect preemption to be disabled when called?

-- Steve

> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
> Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> Tested-by: Andrii Nakryiko <andrii@xxxxxxxxxx> # BPF parts
> Cc: Michael Jeanson <mjeanson@xxxxxxxxxxxx>
> Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> Cc: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
> Cc: Yonghong Song <yhs@xxxxxx>
> Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>
> Cc: Mark Rutland <mark.rutland@xxxxxxx>
> Cc: Alexander Shishkin <alexander.shishkin@xxxxxxxxxxxxxxx>
> Cc: Namhyung Kim <namhyung@xxxxxxxxxx>
> Cc: Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx>
> Cc: bpf@xxxxxxxxxxxxxxx
> Cc: Joel Fernandes <joel@xxxxxxxxxxxxxxxxx>
> ---
>  include/trace/bpf_probe.h | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> index c85bbce5aaa5..211b98d45fc6 100644
> --- a/include/trace/bpf_probe.h
> +++ b/include/trace/bpf_probe.h
> @@ -53,8 +53,17 @@ __bpf_trace_##call(void *__data, proto)				\
>  #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
>  	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))
>
> +#define __BPF_DECLARE_TRACE_SYSCALL(call, proto, args)			\
> +static notrace void							\
> +__bpf_trace_##call(void *__data, proto)					\
> +{									\
> +	guard(preempt_notrace)();					\
> +	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
> +}
> +
>  #undef DECLARE_EVENT_SYSCALL_CLASS
> -#define DECLARE_EVENT_SYSCALL_CLASS DECLARE_EVENT_CLASS
> +#define DECLARE_EVENT_SYSCALL_CLASS(call, proto, args, tstruct, assign, print) \
> +	__BPF_DECLARE_TRACE_SYSCALL(call, PARAMS(proto), PARAMS(args))
>
>  /*
>   * This part is compiled out, it is only here as a build time check