Sasha,

This patch should not be applied to any of the stable kernels. It was
reverted in f9dabe016b63 ("bpf: Undo off-by-one in interpreter tail call
count limit"). I don't think it will pass the CI selftests, so maybe it
wouldn't be applied anyway, but nevertheless I want to inform you about it.

Johan

On Thu, Sep 9, 2021 at 1:43 PM Sasha Levin <sashal@xxxxxxxxxx> wrote:
>
> From: Johan Almbladh <johan.almbladh@xxxxxxxxxxxxxxxxx>
>
> [ Upstream commit b61a28cf11d61f512172e673b8f8c4a6c789b425 ]
>
> Before, the interpreter allowed up to MAX_TAIL_CALL_CNT + 1 tail calls.
> Now precisely MAX_TAIL_CALL_CNT is allowed, which is in line with the
> behavior of the x86 JITs.
>
> Signed-off-by: Johan Almbladh <johan.almbladh@xxxxxxxxxxxxxxxxx>
> Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> Acked-by: Yonghong Song <yhs@xxxxxx>
> Link: https://lore.kernel.org/bpf/20210728164741.350370-1-johan.almbladh@xxxxxxxxxxxxxxxxx
> Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
> ---
>  kernel/bpf/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 0a28a8095d3e..82af6279992d 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1564,7 +1564,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
>
> 		if (unlikely(index >= array->map.max_entries))
> 			goto out;
> -		if (unlikely(tail_call_cnt > MAX_TAIL_CALL_CNT))
> +		if (unlikely(tail_call_cnt >= MAX_TAIL_CALL_CNT))
> 			goto out;
>
> 		tail_call_cnt++;
> --
> 2.30.2
>
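
For reference, a minimal user-space sketch (not kernel code) of the
off-by-one the quoted patch changes, assuming MAX_TAIL_CALL_CNT is 32 as
in the stable kernels concerned: with the old '>' comparison the counter
passes the check 33 times, with '>=' exactly 32 times.

    /* Illustrative only; mirrors the bounds check in ___bpf_prog_run(). */
    #include <stdio.h>

    #define MAX_TAIL_CALL_CNT 32

    static int count_tail_calls(int use_ge)
    {
            int tail_call_cnt = 0;
            int calls = 0;

            for (;;) {
                    /* old check: '>', new check: '>=' */
                    if (use_ge ? tail_call_cnt >= MAX_TAIL_CALL_CNT
                               : tail_call_cnt >  MAX_TAIL_CALL_CNT)
                            break;
                    tail_call_cnt++;
                    calls++;        /* one more tail call is taken */
            }
            return calls;
    }

    int main(void)
    {
            printf("old '>'  check: %d tail calls\n", count_tail_calls(0)); /* 33 */
            printf("new '>=' check: %d tail calls\n", count_tail_calls(1)); /* 32 */
            return 0;
    }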