Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> writes:

> On Thu, Apr 25, 2024 at 11:56 AM Puranjay Mohan <puranjay@xxxxxxxxxx> wrote:
>>
>> Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> writes:
>>
>> > On Thu, Apr 25, 2024 at 3:14 AM Puranjay Mohan <puranjay@xxxxxxxxxx> wrote:
>> >>
>> >> Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> writes:
>> >>
>> >> > On Wed, Apr 24, 2024 at 10:36 AM Puranjay Mohan <puranjay@xxxxxxxxxx> wrote:
>> >> >>
>> >> >> As ARM64 JIT now implements BPF_MOV64_PERCPU_REG instruction, inline
>> >> >> bpf_get_smp_processor_id().
>> >> >>
>> >> >> ARM64 uses the per-cpu variable cpu_number to store the cpu id.
>> >> >>
>> >> >> Here is how the BPF and ARM64 JITed assembly changes after this commit:
>> >> >>
>> >> >>                                          BPF
>> >> >>                                         =====
>> >> >>              BEFORE                                          AFTER
>> >> >>             --------                                        -------
>> >> >>
>> >> >> int cpu = bpf_get_smp_processor_id();        int cpu = bpf_get_smp_processor_id();
>> >> >> (85) call bpf_get_smp_processor_id#229032    (18) r0 = 0xffff800082072008
>> >> >>                                              (bf) r0 = r0
>> >> >
>> >> > nit: hmm, you are probably using a bit outdated bpftool, it should be
>> >> > emitted as:
>> >> >
>> >> > (bf) r0 = &(void __percpu *)(r0)
>> >>
>> >> Yes, I was using the bpftool shipped with the distro. I tried it again
>> >> with the latest bpftool and it emitted this as expected.
>> >
>> > Cool, would be nice to update the commit message with the right syntax
>> > for next revision, thanks!
>> >
>>
>> Sure, will do.
>>
>>
>> >
>> >> >>                                              (61) r0 = *(u32 *)(r0 +0)
>> >> >>
>> >> >>                                       ARM64 JIT
>> >> >>                                      ===========
>> >> >>
>> >> >>              BEFORE                                          AFTER
>> >> >>             --------                                        -------
>> >> >>
>> >> >> int cpu = bpf_get_smp_processor_id();        int cpu = bpf_get_smp_processor_id();
>> >> >> mov x10, #0xfffffffffffff4d0                 mov x7, #0xffff8000ffffffff
>> >> >> movk x10, #0x802b, lsl #16                   movk x7, #0x8207, lsl #16
>> >> >> movk x10, #0x8000, lsl #32                   movk x7, #0x2008
>> >> >> blr x10                                      mrs x10, tpidr_el1
>> >> >> add x7, x0, #0x0                             add x7, x7, x10
>> >> >>                                              ldr w7, [x7]
>> >> >>
>> >> >> Performance improvement using benchmark[1]
>> >> >>
>> >> >>              BEFORE                                          AFTER
>> >> >>             --------                                        -------
>> >> >>
>> >> >> glob-arr-inc   :  23.817 ± 0.019M/s          glob-arr-inc   :  24.631 ± 0.027M/s
>> >> >> arr-inc        :  23.253 ± 0.019M/s          arr-inc        :  23.742 ± 0.023M/s
>> >> >> hash-inc       :  12.258 ± 0.010M/s          hash-inc       :  12.625 ± 0.004M/s
>> >> >>
>> >> >> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
>> >> >>
>> >> >> Signed-off-by: Puranjay Mohan <puranjay@xxxxxxxxxx>
>> >> >> ---
>> >> >>  kernel/bpf/verifier.c | 11 ++++++++++-
>> >> >>  1 file changed, 10 insertions(+), 1 deletion(-)
>> >> >>
>> >> >
>> >> > Besides the nits, lgtm.
>> >> >
>> >> > Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
>> >> >
>> >> >> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> >> >> index 9715c88cc025..3373be261889 100644
>> >> >> --- a/kernel/bpf/verifier.c
>> >> >> +++ b/kernel/bpf/verifier.c
>> >> >> @@ -20205,7 +20205,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>> >> >>                         goto next_insn;
>> >> >>                 }
>> >> >>
>> >> >> -#ifdef CONFIG_X86_64
>> >> >> +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
>> >> >
>> >> > I think you can drop this, we are protected by
>> >> > bpf_jit_supports_percpu_insn() check and newly added inner #if/#elif
>> >> > checks?
>> >>
>> >> If I remove this and later add support of percpu_insn on RISCV without
>> >> inlining bpf_get_smp_processor_id() then it will cause problems here
>> >> right? because then the last 5-6 lines inside this if(){} will be
>> >> executed for RISCV.
>> >
>> > Just add
>> >
>> > #else
>> > return -EFAULT;
>> I don't think we can return.

> ah, because it's not an error condition, right
>
>>
>> > #endif
>> >
>> > ?
>> >
>> > I'm trying to avoid this duplication of the defined(CONFIG_xxx) checks
>> > for supported architectures.
>>
>> Does the following look correct?
>>
>> I will do it like this:
>>
>>        /* Implement bpf_get_smp_processor_id() inline. */
>>        if (insn->imm == BPF_FUNC_get_smp_processor_id &&
>>            prog->jit_requested && bpf_jit_supports_percpu_insn()) {
>>                /* BPF_FUNC_get_smp_processor_id inlining is an
>>                 * optimization, so if pcpu_hot.cpu_number is ever
>>                 * changed in some incompatible and hard to support
>>                 * way, it's fine to back out this inlining logic
>>                 */
>> #if defined(CONFIG_X86_64)
>>                insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
>>                insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>>                insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>>                cnt = 3;
>> #elif defined(CONFIG_ARM64)
>>                struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
>>
>>                insn_buf[0] = cpu_number_addr[0];
>>                insn_buf[1] = cpu_number_addr[1];
>>                insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>>                insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>>                cnt = 4;
>> #else
>>                goto next_insn;
>> #endif

> yep, I just wrote a large comment about goto next_insns above and then
> saw you already proposed that :)

Yep, I think this is the way.

>
>>                new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
>>                if (!new_prog)
>>                        return -ENOMEM;
>>
>>                delta    += cnt - 1;
>>                env->prog = prog = new_prog;
>>                insn      = new_prog->insnsi + i + delta;
>>                goto next_insn;
>>        }
>>
>>
>>
>> >
>> >> >>                 /* Implement bpf_get_smp_processor_id() inline. */
>> >> >>                 if (insn->imm == BPF_FUNC_get_smp_processor_id &&
>> >> >>                     prog->jit_requested && bpf_jit_supports_percpu_insn()) {
>> >> >> @@ -20214,11 +20214,20 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>> >> >>                          * changed in some incompatible and hard to support
>> >> >>                          * way, it's fine to back out this inlining logic
>> >> >>                          */
>> >> >> +#if defined(CONFIG_X86_64)
>> >> >>                         insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
>> >> >>                         insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>> >> >>                         insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>> >> >>                         cnt = 3;
>> >> >> +#elif defined(CONFIG_ARM64)
>> >> >> +                       struct bpf_insn cpu_number_addr[2] = { BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
>> >> >>
>> >> > this &cpu_number offset is not guaranteed to be within 4GB on arm64?
>> >>
>> >> Unfortunately, the per-cpu section is not placed in the first 4GB and
>> >> therefore the per-cpu pointers are not 32-bit on ARM64.
>> >
>> > I see. It might make sense to turn x86-64 code into using MOV64_IMM as
>> > well to keep more of the logic common. Then it will be just the
>> > difference of an offset that's loaded. Give it a try?
>>
>> I think MOV64_IMM would have more overhead than MOV32_IMM and if we can
>> use it in x86-64 we should keep doing it that way. Wdyt?
>
> My assumption (which I didn't check) was that BPF JITs should optimize
> such MOV64_IMM that have a constant fitting within 32-bits with a
> faster and smaller instruction. But I'm fine leaving it as is, of
> course.

You are right. I verified that the JITs will optimize this if the imm
is 32-bit. So, I will make it common in the next version.
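Something like this (completely untested sketch; cpu_addr and
ld_cpu_addr are just illustrative names) is what I have in mind for the
common version, with BPF_LD_IMM64 used for both architectures and only
the loaded address differing:

        /* Implement bpf_get_smp_processor_id() inline. */
        if (insn->imm == BPF_FUNC_get_smp_processor_id &&
            prog->jit_requested && bpf_jit_supports_percpu_insn()) {
                /* BPF_FUNC_get_smp_processor_id inlining is an
                 * optimization, so if pcpu_hot.cpu_number is ever
                 * changed in some incompatible and hard to support
                 * way, it's fine to back out this inlining logic
                 */
#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
#if defined(CONFIG_X86_64)
                u64 cpu_addr = (u64)&pcpu_hot.cpu_number;
#else /* CONFIG_ARM64 */
                u64 cpu_addr = (u64)&cpu_number;
#endif
                /* BPF_LD_IMM64 takes two insn slots; the JITs can emit
                 * a single 32-bit mov when the upper 32 bits of the
                 * address are zero, so this shouldn't regress x86-64.
                 */
                struct bpf_insn ld_cpu_addr[2] = { BPF_LD_IMM64(BPF_REG_0, cpu_addr) };

                insn_buf[0] = ld_cpu_addr[0];
                insn_buf[1] = ld_cpu_addr[1];
                insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
                insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
                cnt = 4;

                new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
                if (!new_prog)
                        return -ENOMEM;

                delta    += cnt - 1;
                env->prog = prog = new_prog;
                insn      = new_prog->insnsi + i + delta;
                goto next_insn;
#else
                goto next_insn;
#endif
        }

This keeps the goto next_insn fallback for other architectures, like
the RISCV case discussed above.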
Also, for the readers, we are discussing:

1) BPF_MOV32_IMM : This moves a 32-bit imm into a register and
   zero-extends it.

2) BPF_LD_IMM64  : This loads a 64-bit imm into a register and takes
   two instruction slots. The JITs will optimize it to the equivalent
   of a BPF_MOV32_IMM if the upper 32 bits of the imm are zero.

Not to be confused with:

3) BPF_MOV64_IMM : This also takes a 32-bit imm but will sign-extend
   it to 64 bits rather than zero-extend.
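To make the difference concrete, here is what each would leave in the
destination register for imm = 0x80000000 (only bit 31 set; values are
just for illustration):

        BPF_MOV32_IMM(BPF_REG_0, 0x80000000)
                /* r0 = 0x0000000080000000 (upper 32 bits zeroed) */

        BPF_MOV64_IMM(BPF_REG_0, 0x80000000)
                /* r0 = 0xffffffff80000000 (imm sign-extended to 64 bits) */

        BPF_LD_IMM64(BPF_REG_0, 0x80000000)
                /* r0 = 0x0000000080000000 (full 64-bit imm in two insn
                 * slots; JITs can emit a single 32-bit mov here because
                 * the upper 32 bits are zero, as discussed above)
                 */

Thanks,
Puranjay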