In order to prevent deadlock the verifier currently disallows any function
calls under bpf_spin_lock save for a small set of allowlisted helpers/kfuncs.
A BPF program that calls destructive kfuncs might well be trying to cause
deadlock, and regardless is understood to be capable of causing system
breakage of similar severity. Per kfuncs.rst:

  The KF_DESTRUCTIVE flag is used to indicate functions calling which is
  destructive to the system. For example such a call can result in system
  rebooting or panicking. Due to this additional restrictions apply to these
  calls.

Preventing BPF programs from crashing or otherwise blowing up the system is
generally the verifier's goal, but a destructive kfunc may have exactly such
a state as its intended result. Preventing KF_DESTRUCTIVE kfunc calls under
spinlock in the name of safety is therefore unnecessarily strict. This patch
modifies the "function calls are not allowed while holding a lock" check to
allow calling destructive kfuncs with an active_lock.

The motivating usecase for this change - unsafe locking of bpf_spin_locks for
easy testing of race conditions - is implemented in the next two patches in
the series.

Note that the removed insn->off check was rejecting any calls to kfuncs
defined in non-vmlinux BTF. In order to get the system into a broken or
otherwise interesting state for inspection, developers might load a module
implementing destructive kfuncs particular to their usecase. The
unsafe_spin_{lock, unlock} kfuncs later in this series are a good example:
there's no clear reason for them to be in vmlinux as they're specifically for
BPF selftests, so they live in bpf_testmod. That check is removed in favor of
a newly-added helper function to enable such usecases.

Signed-off-by: Dave Marchevsky <davemarchevsky@xxxxxx>
---
 kernel/bpf/verifier.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 48c3e2bbcc4a..1bf0e6411feb 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -330,6 +330,11 @@ struct bpf_kfunc_call_arg_meta {
 	u64 mem_size;
 };
 
+static int fetch_kfunc_meta(struct bpf_verifier_env *env,
+			    struct bpf_insn *insn,
+			    struct bpf_kfunc_call_arg_meta *meta,
+			    const char **kfunc_name);
+
 struct btf *btf_vmlinux;
 
 static DEFINE_MUTEX(bpf_verifier_lock);
@@ -10313,6 +10318,21 @@ static bool is_rbtree_lock_required_kfunc(u32 btf_id)
 	return is_bpf_rbtree_api_kfunc(btf_id);
 }
 
+static bool is_kfunc_callable_in_spinlock(struct bpf_verifier_env *env,
+					  struct bpf_insn *insn)
+{
+	struct bpf_kfunc_call_arg_meta meta;
+
+	/* insn->off is idx into btf fd_array - 0 for vmlinux btf, else nonzero */
+	if (!insn->off && is_bpf_graph_api_kfunc(insn->imm))
+		return true;
+
+	if (fetch_kfunc_meta(env, insn, &meta, NULL))
+		return false;
+
+	return is_kfunc_destructive(&meta);
+}
+
 static bool check_kfunc_is_graph_root_api(struct bpf_verifier_env *env,
 					  enum btf_field_type head_field_type,
 					  u32 kfunc_btf_id)
@@ -16218,7 +16238,7 @@ static int do_check(struct bpf_verifier_env *env)
 				if ((insn->src_reg == BPF_REG_0 && insn->imm != BPF_FUNC_spin_unlock) ||
 				    (insn->src_reg == BPF_PSEUDO_CALL) ||
 				    (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
-				     (insn->off != 0 || !is_bpf_graph_api_kfunc(insn->imm)))) {
+				     !is_kfunc_callable_in_spinlock(env, insn))) {
 					verbose(env, "function calls are not allowed while holding a lock\n");
 					return -EINVAL;
 				}
-- 
2.34.1
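
For illustration only (not part of this patch): a minimal BPF-side sketch of
the kind of program the relaxed check is intended to accept. The kfunc name
bpf_testmod_destructive() is hypothetical, standing in for any module-provided
KF_DESTRUCTIVE kfunc, and loading such a program still requires the usual
destructive-kfunc privileges. Before this change the verifier would reject the
call under bpf_spin_lock with "function calls are not allowed while holding a
lock".

/* Hypothetical example program; kfunc and map names are illustrative. */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

struct locked_val {
	int counter;
	struct bpf_spin_lock lock;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct locked_val);
} lock_map SEC(".maps");

/* Hypothetical KF_DESTRUCTIVE kfunc exported via a test module's BTF */
extern void bpf_testmod_destructive(void) __ksym;

SEC("tc")
int call_destructive_under_lock(void *ctx)
{
	struct locked_val *v;
	int key = 0;

	v = bpf_map_lookup_elem(&lock_map, &key);
	if (!v)
		return 0;

	bpf_spin_lock(&v->lock);
	/* Previously rejected while a lock is held; accepted after this
	 * change because the callee is a KF_DESTRUCTIVE kfunc.
	 */
	bpf_testmod_destructive();
	bpf_spin_unlock(&v->lock);

	return 0;
}

char _license[] SEC("license") = "GPL";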