Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> writes:

> On Fri, Apr 5, 2024 at 5:44 AM Puranjay Mohan <puranjay12@xxxxxxxxx> wrote:
>>
>> Support an instruction for resolving absolute addresses of per-CPU
>> data from their per-CPU offsets. This instruction is internal-only;
>> users are not allowed to use it directly. For now, it is only used
>> for inlining optimizations internal to the BPF verifier and the BPF
>> JITs.
>>
>> RISC-V uses the generic per-cpu implementation, where the offsets
>> for the CPUs are kept in an array called __per_cpu_offset[cpu_number].
>> RISC-V stores the address of the task_struct in the TP register. The
>> first element in task_struct is struct thread_info, so we can get
>> the cpu number by reading from the TP register +
>> offsetof(struct thread_info, cpu).
>>
>> Once we have the cpu number in a register, we read the offset for
>> that cpu from the address &__per_cpu_offset + (cpu_number << 3).
>> Then we add this offset to the destination register.
>>
>> To measure the improvement from this change, the benchmark in [1]
>> was used on QEMU:
>>
>> Before:
>> glob-arr-inc   :    1.127 ± 0.013M/s
>> arr-inc        :    1.121 ± 0.004M/s
>> hash-inc       :    0.681 ± 0.052M/s
>>
>> After:
>> glob-arr-inc   :    1.138 ± 0.011M/s
>> arr-inc        :    1.366 ± 0.006M/s
>> hash-inc       :    0.676 ± 0.001M/s
>
> TBH, I don't trust benchmarks done inside QEMU. Can you try running
> this on some real hardware?

I just ran it on a "VisionFive2" SBC:

BEFORE
======
glob-arr-inc   :   11.586 ± 0.021M/s
arr-inc        :   10.892 ± 0.005M/s
hash-inc       :    1.517 ± 0.001M/s

AFTER
=====
glob-arr-inc   :   11.893 ± 0.017M/s (+2.6%)
arr-inc        :   11.630 ± 0.020M/s (+6.8%)
hash-inc       :    1.543 ± 0.002M/s (+1.7%)

(It's early, and the coffee hasn't kicked in, so I hope the
calculations are correct...)

>>
>> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
>>
>> Signed-off-by: Puranjay Mohan <puranjay12@xxxxxxxxx>
>> ---
>>  arch/riscv/net/bpf_jit_comp64.c | 24 ++++++++++++++++++++++++
>>  1 file changed, 24 insertions(+)
>>
>> diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
>> index 15e482f2c657..e95bd1d459a4 100644
>> --- a/arch/riscv/net/bpf_jit_comp64.c
>> +++ b/arch/riscv/net/bpf_jit_comp64.c
>> @@ -12,6 +12,7 @@
>>  #include <linux/stop_machine.h>
>>  #include <asm/patch.h>
>>  #include <asm/cfi.h>
>> +#include <asm/percpu.h>
>>  #include "bpf_jit.h"
>>
>>  #define RV_FENTRY_NINSNS 2
>> @@ -1089,6 +1090,24 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
>>  			emit_or(RV_REG_T1, rd, RV_REG_T1, ctx);
>>  			emit_mv(rd, RV_REG_T1, ctx);
>>  			break;
>> +		} else if (insn_is_mov_percpu_addr(insn)) {
>> +			if (rd != rs)
>> +				emit_mv(rd, rs, ctx);
>
> Is this an unconditional move instruction? In x86-64, EMIT_mov checks
> whether source and destination registers are the same and doesn't
> emit anything if they match (which makes sense, right)?

Yeah, it is. Folding the check into the emit sounds like a good idea.


Björn
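
PS. Something like the completely untested sketch below is what I had
in mind for the fold. It assumes the emit_addi() helper in
arch/riscv/net/bpf_jit.h (which emit_mv() already wraps, since
"mv rd, rs" is just "addi rd, rs, 0" on RISC-V):

static inline void emit_mv(u8 rd, u8 rs, struct rv_jit_context *ctx)
{
	/* mv rd, rs expands to addi rd, rs, 0; for rd == rs that is a
	 * pure no-op, so emit nothing at all, mirroring what x86-64's
	 * EMIT_mov does.
	 */
	if (rd != rs)
		emit_addi(rd, rs, 0, ctx);
}

With that, the JIT call site above could do an unconditional
emit_mv(rd, rs, ctx) and drop the open-coded rd != rs check.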