On Wed, Sep 02, 2020 at 10:08:13PM +0200, Maciej Fijalkowski wrote:
> Protect against potential stack overflow that might happen when bpf2bpf
> calls get combined with tailcalls. Limit the caller's stack depth for
> such case down to 256 so that the worst case scenario would result in 8k
> stack size (32 which is tailcall limit * 256 = 8k).
>
> Suggested-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@xxxxxxxxx>
> ---
>  include/linux/bpf_verifier.h |  1 +
>  kernel/bpf/verifier.c        | 28 ++++++++++++++++++++++++++++
>  2 files changed, 29 insertions(+)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index 53c7bd568c5d..5026b75db972 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -358,6 +358,7 @@ struct bpf_subprog_info {
>  	u32 start; /* insn idx of function entry point */
>  	u32 linfo_idx; /* The idx to the main_prog->aux->linfo */
>  	u16 stack_depth; /* max. stack depth used by this function */
> +	bool has_tail_call;
>  };
>
>  /* single container for all structs
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 8f9e95f5f73f..b12527d87edb 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1490,6 +1490,8 @@ static int check_subprogs(struct bpf_verifier_env *env)
>  	for (i = 0; i < insn_cnt; i++) {
>  		u8 code = insn[i].code;
>
> +		if (insn[i].imm == BPF_FUNC_tail_call)
> +			subprog[cur_subprog].has_tail_call = true;

This will randomly match on other opcodes whose imm field happens to equal
BPF_FUNC_tail_call. The check should probably be moved a few lines down,
after the BPF_JMP && BPF_CALL && insn->src_reg != BPF_PSEUDO_CALL check, so
that it only triggers on actual helper calls.

Another option would be to move it into check_helper_call(), since that
function already matches on:
	if (func_id == BPF_FUNC_tail_call) {
		err = check_reference_leak(env);
but adding a find_subprog() call there to mark the subprog seems less
efficient than doing it during check_subprogs().
>  		if (BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32)
>  			goto next;
>  		if (BPF_OP(code) == BPF_EXIT || BPF_OP(code) == BPF_CALL)