Add a might_fault() check to validate that the bpf sys_enter/sys_exit
probe callbacks are indeed called from a context where page faults can
be handled.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
Tested-by: Andrii Nakryiko <andrii@xxxxxxxxxx> # BPF parts
Cc: Michael Jeanson <mjeanson@xxxxxxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
Cc: Yonghong Song <yhs@xxxxxx>
Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Alexander Shishkin <alexander.shishkin@xxxxxxxxxxxxxxx>
Cc: Namhyung Kim <namhyung@xxxxxxxxxx>
Cc: Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx>
Cc: bpf@xxxxxxxxxxxxxxx
Cc: Joel Fernandes <joel@xxxxxxxxxxxxxxxxx>
---
 include/trace/bpf_probe.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
index 211b98d45fc6..099df5c3e38a 100644
--- a/include/trace/bpf_probe.h
+++ b/include/trace/bpf_probe.h
@@ -57,6 +57,7 @@ __bpf_trace_##call(void *__data, proto)			\
 static notrace void						\
 __bpf_trace_##call(void *__data, proto)			\
 {								\
+	might_fault();						\
 	guard(preempt_notrace)();				\
 	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
 }
-- 
2.39.2
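
For readers unfamiliar with might_fault(): below is a simplified sketch
of the check it performs on debug kernels, based on the upstream
__might_fault() logic in mm/memory.c. It is an illustration, not the
verbatim kernel source; the name might_fault_sketch() is hypothetical,
and on non-debug configs the real macro compiles to a no-op.

	static void might_fault_sketch(void)
	{
		/*
		 * Explicitly disabled page faults are fine: user copies
		 * then take a non-faulting path instead of sleeping.
		 */
		if (pagefault_disabled())
			return;

		/*
		 * Handling a fault may sleep, so warn if we are in
		 * atomic context (e.g. preemption disabled).
		 */
		might_sleep();

		/*
		 * Servicing a fault takes mmap_lock for read; let
		 * lockdep verify that acquiring it here is safe.
		 */
		if (current->mm)
			might_lock_read(&current->mm->mmap_lock);
	}

Note that the check is placed before guard(preempt_notrace)(), i.e. it
runs while preemption is still enabled, which is exactly the property
the patch asserts for the faultable sys_enter/sys_exit probes.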