From: Alexei Starovoitov <ast@xxxxxxxxxx>

v2->v3:
- switched to rcu_trace
- added bpf_copy_from_user

Here are the 'perf report' differences:

sleepable with SRCU:
   3.86%  bench  [k]  __srcu_read_unlock
   3.22%  bench  [k]  __srcu_read_lock
   0.92%  bench  [k]  bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
   0.50%  bench  [k]  bpf_trampoline_10297
   0.26%  bench  [k]  __bpf_prog_exit_sleepable
   0.21%  bench  [k]  __bpf_prog_enter_sleepable

sleepable with RCU_TRACE:
   0.79%  bench  [k]  bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
   0.72%  bench  [k]  bpf_trampoline_10381
   0.31%  bench  [k]  __bpf_prog_exit_sleepable
   0.29%  bench  [k]  __bpf_prog_enter_sleepable

non-sleepable with RCU:
   0.88%  bench  [k]  bpf_prog_740d4210cdcd99a3_bench_trigger_fentry
   0.84%  bench  [k]  bpf_trampoline_10297
   0.13%  bench  [k]  __bpf_prog_enter
   0.12%  bench  [k]  __bpf_prog_exit

Happy to confirm that rcu_trace overhead is negligible.

v1->v2:
- split fmod_ret fix into separate patch
- added blacklist

v1:
This patch set introduces the minimal viable support for sleepable bpf
programs. In this patch set only fentry/fexit/fmod_ret and lsm progs can
be sleepable. Only array and pre-allocated hash and lru maps are allowed.

Alexei Starovoitov (4):
  bpf: Introduce sleepable BPF programs
  bpf: Add bpf_copy_from_user() helper.
  libbpf: support sleepable progs
  selftests/bpf: basic sleepable tests

 arch/x86/net/bpf_jit_comp.c                      | 32 ++++++----
 include/linux/bpf.h                              |  4 ++
 include/uapi/linux/bpf.h                         | 19 +++++-
 init/Kconfig                                     |  1 +
 kernel/bpf/arraymap.c                            |  6 ++
 kernel/bpf/hashtab.c                             | 20 ++++--
 kernel/bpf/helpers.c                             | 22 +++++++
 kernel/bpf/syscall.c                             | 13 +++-
 kernel/bpf/trampoline.c                          | 37 ++++++---
 kernel/bpf/verifier.c                            | 62 ++++++++++++++++++-
 kernel/trace/bpf_trace.c                         |  2 +
 tools/include/uapi/linux/bpf.h                   | 19 +++++-
 tools/lib/bpf/libbpf.c                           | 25 +++++++-
 tools/testing/selftests/bpf/bench.c              |  2 +
 .../selftests/bpf/benchs/bench_trigger.c         | 17 +++++
 tools/testing/selftests/bpf/progs/lsm.c          | 14 ++++-
 .../selftests/bpf/progs/trigger_bench.c          |  7 +++
 17 files changed, 268 insertions(+), 34 deletions(-)

-- 
2.23.0
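For readers unfamiliar with the feature, a sleepable program as enabled by this series might look like the sketch below. This is an illustration, not code from the patches: the attach target and the map are hypothetical, and it only builds in a BPF selftests-style environment with vmlinux.h and libbpf headers available. The ".s" suffix in the SEC() name is what marks the program sleepable to libbpf, and bpf_copy_from_user() is the new helper that may fault in user pages (hence the need for a sleepable context).

```c
// Hypothetical sketch of a sleepable fentry program; not part of the
// patch set. Requires a BPF build environment (clang -target bpf,
// vmlinux.h, libbpf) -- it is not host-compilable C.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* Array maps are among the map types permitted for sleepable progs. */
struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, u32);
	__type(value, u64);
} scratch SEC(".maps");

/* The ".s" suffix tells libbpf to load this program as sleepable. */
SEC("fentry.s/__x64_sys_nanosleep")
int BPF_PROG(fentry_sleep_example)
{
	u64 buf = 0;
	u32 key = 0;
	u64 *val;

	/* bpf_copy_from_user() may sleep while faulting in the user
	 * page; the NULL user pointer here is illustrative only and
	 * would simply make the helper return an error.
	 */
	bpf_copy_from_user(&buf, sizeof(buf), NULL);

	val = bpf_map_lookup_elem(&scratch, &key);
	if (val)
		*val = buf;
	return 0;
}
```

Loading is unchanged from non-sleepable programs apart from the section name; libbpf derives the sleepable flag from it, so no extra userspace API is needed.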