This is a note to let you know that I've just added the patch titled

    bpf: Fix off-by-one in tail call count limiting

to the 4.19-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:

    bpf-fix-off-by-one-in-tail-call-count-limiting.patch

and it can be found in the queue-4.19 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


commit bc0fccc371e8e360c158d0a4704fce6c3c17f7b4
Author: Johan Almbladh <johan.almbladh@xxxxxxxxxxxxxxxxx>
Date:   Wed Jul 28 18:47:41 2021 +0200

    bpf: Fix off-by-one in tail call count limiting

    [ Upstream commit b61a28cf11d61f512172e673b8f8c4a6c789b425 ]

    Before, the interpreter allowed up to MAX_TAIL_CALL_CNT + 1 tail calls.
    Now precisely MAX_TAIL_CALL_CNT is allowed, which is in line with the
    behavior of the x86 JITs.

    Signed-off-by: Johan Almbladh <johan.almbladh@xxxxxxxxxxxxxxxxx>
    Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
    Acked-by: Yonghong Song <yhs@xxxxxx>
    Link: https://lore.kernel.org/bpf/20210728164741.350370-1-johan.almbladh@xxxxxxxxxxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 341402bc1202..5a417309cc2d 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1200,7 +1200,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 		if (unlikely(index >= array->map.max_entries))
 			goto out;
-		if (unlikely(tail_call_cnt > MAX_TAIL_CALL_CNT))
+		if (unlikely(tail_call_cnt >= MAX_TAIL_CALL_CNT))
 			goto out;
 		tail_call_cnt++;