During runtime unwinding and cleanup, we need to figure out where the
callee-saved registers are stored on the stack, so that when a bpf_throw
call is made, all frames can restore their callee-saved registers by
finding their saved copies on the stacks of callee frames.

While the previous patch ensured that any BPF callee-saved registers are
saved on a hidden subprog stack frame before entry into the kernel (where
we would not know their location if spilled), there are cases where a
subprog's R6-R9 are not spilled into its immediate callee's stack frame,
but much later in the call chain, in some later callee's stack frame. As
such, while walking down the stack, we need to figure out which frames
have spilled their incoming callee-saved regs, and thus keep track of
where the latest spill happened with respect to a given frame in the
stack trace.

To do this, we need to know at runtime, during the unwinding phase, which
callee-saved registers a given subprog saves. The x86 JIT already figures
this out conveniently in detect_reg_usage. Lift that logic into the
verifier core, and copy the information into struct bpf_prog_aux before
the JIT step, so that it is preserved and available at runtime.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx>
---
 include/linux/bpf.h          |  1 +
 include/linux/bpf_verifier.h |  1 +
 kernel/bpf/verifier.c        | 10 ++++++++++
 3 files changed, 12 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 83cff18a1b66..4ac6add0cec8 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1460,6 +1460,7 @@ struct bpf_prog_aux {
 	bool xdp_has_frags;
 	bool exception_cb;
 	bool exception_boundary;
+	bool callee_regs_used[4];
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
 	/* function name for valid attach_btf_id */
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 04e27fce33d6..e08ff540ec44 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -620,6 +620,7 @@ struct bpf_subprog_info {
 	u32 start; /* insn idx of function entry point */
 	u32 linfo_idx; /* The idx to the main_prog->aux->linfo */
 	u16 stack_depth; /* max. stack depth used by this function */
+	bool callee_regs_used[4];
 	bool has_tail_call: 1;
 	bool tail_call_reachable: 1;
 	bool has_ld_abs: 1;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 942243cba9f1..aeaf97b0a749 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2942,6 +2942,15 @@ static int check_subprogs(struct bpf_verifier_env *env)
 		    insn[i].src_reg == 0 &&
 		    insn[i].imm == BPF_FUNC_tail_call)
 			subprog[cur_subprog].has_tail_call = true;
+		/* Collect callee regs used in the subprog. */
+		if (insn[i].dst_reg == BPF_REG_6 || insn[i].src_reg == BPF_REG_6)
+			subprog[cur_subprog].callee_regs_used[0] = true;
+		if (insn[i].dst_reg == BPF_REG_7 || insn[i].src_reg == BPF_REG_7)
+			subprog[cur_subprog].callee_regs_used[1] = true;
+		if (insn[i].dst_reg == BPF_REG_8 || insn[i].src_reg == BPF_REG_8)
+			subprog[cur_subprog].callee_regs_used[2] = true;
+		if (insn[i].dst_reg == BPF_REG_9 || insn[i].src_reg == BPF_REG_9)
+			subprog[cur_subprog].callee_regs_used[3] = true;
 		if (!env->seen_throw_insn && is_bpf_throw_kfunc(&insn[i]))
 			env->seen_throw_insn = true;
 		if (BPF_CLASS(code) == BPF_LD &&
@@ -19501,6 +19510,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		}
 		func[i]->aux->num_exentries = num_exentries;
 		func[i]->aux->tail_call_reachable = env->subprog_info[i].tail_call_reachable;
+		memcpy(&func[i]->aux->callee_regs_used, env->subprog_info[i].callee_regs_used, sizeof(func[i]->aux->callee_regs_used));
 		func[i]->aux->exception_cb = env->subprog_info[i].is_exception_cb;
 		if (!i)
 			func[i]->aux->exception_boundary = env->seen_exception;
--
2.40.1