On Fri, Jan 5, 2024 at 2:34 AM Leon Hwang <hffilwlqm@xxxxxxxxx> wrote:
>
>
> On 5/1/24 12:15, Alexei Starovoitov wrote:
> > On Thu, Jan 4, 2024 at 6:23 AM Leon Hwang <hffilwlqm@xxxxxxxxx> wrote:
> >>
> >>
> >> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> >> index fe30b9ebb8de4..67fa337fc2e0c 100644
> >> --- a/arch/x86/net/bpf_jit_comp.c
> >> +++ b/arch/x86/net/bpf_jit_comp.c
> >> @@ -259,7 +259,7 @@ struct jit_context {
> >>  /* Number of bytes emit_patch() needs to generate instructions */
> >>  #define X86_PATCH_SIZE         5
> >>  /* Number of bytes that will be skipped on tailcall */
> >> -#define X86_TAIL_CALL_OFFSET   (11 + ENDBR_INSN_SIZE)
> >> +#define X86_TAIL_CALL_OFFSET   (22 + ENDBR_INSN_SIZE)
> >>
> >>  static void push_r12(u8 **pprog)
> >>  {
> >> @@ -406,14 +406,21 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
> >>           */
> >>          emit_nops(&prog, X86_PATCH_SIZE);
> >>          if (!ebpf_from_cbpf) {
> >> -                if (tail_call_reachable && !is_subprog)
> >> +                if (tail_call_reachable && !is_subprog) {
> >>                          /* When it's the entry of the whole tailcall context,
> >>                           * zeroing rax means initialising tail_call_cnt.
> >>                           */
> >> -                        EMIT2(0x31, 0xC0); /* xor eax, eax */
> >> -                else
> >> -                        /* Keep the same instruction layout. */
> >> -                        EMIT2(0x66, 0x90); /* nop2 */
> >> +                        EMIT2(0x31, 0xC0);       /* xor eax, eax */
> >> +                        EMIT1(0x50);             /* push rax */
> >> +                        /* Make rax as ptr that points to tail_call_cnt. */
> >> +                        EMIT3(0x48, 0x89, 0xE0); /* mov rax, rsp */
> >> +                        EMIT1_off32(0xE8, 2);    /* call main prog */
> >> +                        EMIT1(0x59);             /* pop rcx, get rid of tail_call_cnt */
> >> +                        EMIT1(0xC3);             /* ret */
> >> +                } else {
> >> +                        /* Keep the same instruction size. */
> >> +                        emit_nops(&prog, 13);
> >> +                }
> >
> > I'm afraid the extra call breaks stack unwinding and many other things.
> > The proper frame needs to be set up (push rbp; etc.),
> > and 'leave' + emit_return() must be used.
> > A plain 'ret' is not ok.
> > x86_call_depth_emit_accounting() needs to be used too.
> > That will make the X86_TAIL_CALL_OFFSET adjustment very complicated.
> > Also, the fix doesn't address the stack size issue.
> > We shouldn't allow all the extra frames at run-time.
> >
> > The tail_cnt_ptr approach is interesting but too heavy,
> > since arm64, s390 and other JITs would need to repeat it with equally
> > complicated calculations in TAIL_CALL_OFFSET.
> >
> > The fix should really be thought through for all JITs, not just x86.
> >
> > I'm thinking whether we should do the following instead:
> >
> > diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> > index 0bdbbbeab155..0b45571559be 100644
> > --- a/kernel/bpf/arraymap.c
> > +++ b/kernel/bpf/arraymap.c
> > @@ -910,7 +910,7 @@ static void *prog_fd_array_get_ptr(struct bpf_map *map,
> >         if (IS_ERR(prog))
> >                 return prog;
> >
> > -       if (!bpf_prog_map_compatible(map, prog)) {
> > +       if (!bpf_prog_map_compatible(map, prog) || prog->aux->func_cnt) {
> >                 bpf_prog_put(prog);
> >                 return ERR_PTR(-EINVAL);
> >         }
> >
> > This will stop stack growth, but it will break a few existing tests.
> > I feel it's a price worth paying.
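To be concrete about the frame setup point: the wrapper would need roughly
the following shape. This is an untested sketch only; 'offset' and the
emit_return() arguments are placeholders, not a verified change to the JIT.

        /* Untested sketch: keep the frame-pointer chain intact for
         * unwinders and return via emit_return() rather than a bare ret.
         */
        EMIT1(0x55);                   /* push rbp */
        EMIT3(0x48, 0x89, 0xE5);       /* mov rbp, rsp */
        EMIT2(0x31, 0xC0);             /* xor eax, eax ; tail_call_cnt = 0 */
        EMIT1(0x50);                   /* push rax     ; tail_call_cnt on stack */
        EMIT3(0x48, 0x89, 0xE0);       /* mov rax, rsp ; rax = &tail_call_cnt */
        /* x86_call_depth_emit_accounting() would have to be applied to
         * this call as well.
         */
        EMIT1_off32(0xE8, offset);     /* call main prog; 'offset' is a placeholder */
        EMIT1(0xC9);                   /* leave */
        emit_return(&prog, ip);        /* emit_return() instead of EMIT1(0xC3);
                                        * 'ip' is a placeholder here
                                        */

All of this machinery is what makes the X86_TAIL_CALL_OFFSET accounting
complicated, which is the objection above.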
>
> I don't think this can avoid this issue completely.
>
> For example:
>
> #include "vmlinux.h"
> #include "bpf_helpers.h"
>
> struct {
>         __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
>         __uint(max_entries, 1);
>         __uint(key_size, sizeof(__u32));
>         __uint(value_size, sizeof(__u32));
> } prog_array SEC(".maps");
>
> static __noinline int
> subprog(struct __sk_buff *skb)
> {
>         volatile int retval = 0;
>
>         bpf_tail_call(skb, &prog_array, 0);
>
>         return retval;
> }
>
> SEC("tc")
> int entry(struct __sk_buff *skb)
> {
>         const int N = 10000;
>
>         for (int i = 0; i < N; i++)
>                 subprog(skb);
>
>         return 0;
> }
>
> char _license[] SEC("license") = "GPL";
>
> Then, objdump its asm:
>
> Disassembly of section .text:
>
> 0000000000000000 <subprog>:
> ; {
>        0:       b7 02 00 00 00 00 00 00 r2 = 0x0
> ; volatile int retval = 0;
>        1:       63 2a fc ff 00 00 00 00 *(u32 *)(r10 - 0x4) = r2
> ; bpf_tail_call(skb, &prog_array, 0);
>        2:       18 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r2 = 0x0 ll
>        4:       b7 03 00 00 00 00 00 00 r3 = 0x0
>        5:       85 00 00 00 0c 00 00 00 call 0xc
> ; return retval;
>        6:       61 a1 fc ff 00 00 00 00 r1 = *(u32 *)(r10 - 0x4)
>        7:       95 00 00 00 00 00 00 00 exit
>
> Disassembly of section tc:
>
> 0000000000000000 <entry>:
> ; {
>        0:       bf 16 00 00 00 00 00 00 r6 = r1
>        1:       b7 07 00 00 10 27 00 00 r7 = 0x2710
>
> 0000000000000010 <LBB0_1>:
> ; subprog(skb);
>        2:       bf 61 00 00 00 00 00 00 r1 = r6
>        3:       85 10 00 00 ff ff ff ff call -0x1
> ; for (int i = 0; i < N; i++)
>        4:       07 07 00 00 ff ff ff ff r7 += -0x1
>        5:       bf 71 00 00 00 00 00 00 r1 = r7
>        6:       67 01 00 00 20 00 00 00 r1 <<= 0x20
>        7:       77 01 00 00 20 00 00 00 r1 >>= 0x20
>        8:       15 01 01 00 00 00 00 00 if r1 == 0x0 goto +0x1 <LBB0_2>
>        9:       05 00 f8 ff 00 00 00 00 goto -0x8 <LBB0_1>
>
> 0000000000000050 <LBB0_2>:
> ; return 0;
>       10:       b7 00 00 00 00 00 00 00 r0 = 0x0
>       11:       95 00 00 00 00 00 00 00 exit
>
> As a result, the bpf prog in prog_array can be tail-called N times,
> even though there is no subprog in the prog_array prog itself.

You mean that the total execution time is N*N,
and tailcall is a way to increase the loop count?
We allow BPF_MAX_LOOPS = 8 * 1024 * 1024 in bpf_loop,
so many calls to subprog(skb) are not an issue
as long as they don't stall the cpu and don't increase the stack size.
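For comparison, an iteration count like that is already expressible
directly with the bpf_loop helper. A minimal sketch, using the same
headers as the example above; loop_cb and the count of 10000 are
illustrative:

#include "vmlinux.h"
#include "bpf_helpers.h"

/* Callback invoked once per iteration; returning 1 would stop the loop. */
static int loop_cb(__u32 index, void *ctx)
{
        return 0;
}

SEC("tc")
int entry(struct __sk_buff *skb)
{
        /* The kernel caps nr_loops at BPF_MAX_LOOPS (8 * 1024 * 1024). */
        bpf_loop(10000, loop_cb, NULL, 0);
        return 0;
}

char _license[] SEC("license") = "GPL";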