On 7/28/21 9:47 AM, Johan Almbladh wrote:
Before, the interpreter allowed up to MAX_TAIL_CALL_CNT + 1 tail calls.
Now precisely MAX_TAIL_CALL_CNT is allowed, which is in line with the
behavior of the x86 JITs.
Signed-off-by: Johan Almbladh <johan.almbladh@xxxxxxxxxxxxxxxxx>
LGTM.
Acked-by: Yonghong Song <yhs@xxxxxx>
I also checked the arm/arm64 JITs and saw the following comment:
/* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
 *     goto out;
 * tail_call_cnt++;
 */
Maybe we have the same MAX_TAIL_CALL_CNT + 1 issue
in the arm/arm64 JITs?
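Given that comment, the increment in those JITs also seems to happen
after the check, so the same counting would apply: with the counter
starting at 0, a '>' comparison passes for tail_call_cnt =
0 .. MAX_TAIL_CALL_CNT, i.e. MAX_TAIL_CALL_CNT + 1 tail calls, while
'>=' passes only for 0 .. MAX_TAIL_CALL_CNT - 1, i.e. exactly
MAX_TAIL_CALL_CNT.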
---
kernel/bpf/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 9b1577498373..67682b3afc84 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1559,7 +1559,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		if (unlikely(index >= array->map.max_entries))
 			goto out;
-		if (unlikely(tail_call_cnt > MAX_TAIL_CALL_CNT))
+		if (unlikely(tail_call_cnt >= MAX_TAIL_CALL_CNT))
 			goto out;
 		tail_call_cnt++;
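
For anyone who wants to check the counting outside the kernel, here is
a minimal user-space sketch (not kernel code; count_tail_calls() is
just a made-up helper, and MAX_TAIL_CALL_CNT is assumed to be 32 here,
although any value shows the same off-by-one) contrasting the old '>'
check with the new '>=' check:

#include <stdio.h>

#define MAX_TAIL_CALL_CNT 32

/* How many tail calls does the given check admit when the counter
 * starts at 0 and is incremented after the check?
 */
static int count_tail_calls(int use_ge)
{
	int tail_call_cnt = 0;
	int performed = 0;

	for (;;) {
		if (use_ge) {
			/* new interpreter check, matching the x86 JITs */
			if (tail_call_cnt >= MAX_TAIL_CALL_CNT)
				break;
		} else {
			/* old interpreter check, and the arm/arm64 comment above */
			if (tail_call_cnt > MAX_TAIL_CALL_CNT)
				break;
		}
		tail_call_cnt++;
		performed++;
	}

	return performed;
}

int main(void)
{
	printf("'>'  admits %d tail calls\n", count_tail_calls(0));	/* 33 */
	printf("'>=' admits %d tail calls\n", count_tail_calls(1));	/* 32 */
	return 0;
}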