Re: [PATCH bpf-next v3 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs

On Fri, Apr 26, 2024 at 9:55 AM Puranjay Mohan <puranjay@xxxxxxxxxx> wrote:
>
> Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> writes:
>
> > On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <puranjay@xxxxxxxxxx> wrote:
> >>
> >> From: Puranjay Mohan <puranjay12@xxxxxxxxx>
> >>
> >> Support an instruction for resolving absolute addresses of per-CPU
> >> data from their per-CPU offsets. This instruction is internal-only:
> >> users are not allowed to use it directly. For now, it will only be
> >> used for internal inlining optimizations between the BPF verifier
> >> and the BPF JITs.
> >>
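> >> For reference, the internal encoding reuses BPF_ALU64 | BPF_MOV | BPF_X
> >> with a special insn->off value; a rough sketch, mirroring the x86-64
> >> counterpart of this change (exact placement of the macros may differ):
> >>
> >>   /* include/linux/filter.h */
> >>   #define BPF_ADDR_PERCPU        (-1)
> >>
> >>   #define BPF_MOV64_PERCPU_REG(DST, SRC)                    \
> >>           ((struct bpf_insn) {                              \
> >>                   .code  = BPF_ALU64 | BPF_MOV | BPF_X,     \
> >>                   .dst_reg = DST,                           \
> >>                   .src_reg = SRC,                           \
> >>                   .off   = BPF_ADDR_PERCPU,                 \
> >>                   .imm   = 0 })
> >>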
> >> Since commit 7158627686f0 ("arm64: percpu: implement optimised pcpu
> >> access using tpidr_el1"), the per-cpu offset for the CPU is stored in
> >> the tpidr_el1/2 register of that CPU.
> >>
> >> To support this BPF instruction in the ARM64 JIT, the following ARM64
> >> instructions are emitted:
> >>
> >> mov dst, src            // Move src to dst, if src != dst
> >> mrs tmp, tpidr_el1/2    // Move the per-CPU offset of the current CPU into tmp.
> >> add dst, dst, tmp       // Add the per-CPU offset to dst.
> >>
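> >> In arch/arm64/net/bpf_jit_comp.c terms, this is essentially the
> >> following fragment of the BPF_ALU64 | BPF_MOV | BPF_X case (a sketch;
> >> dst/src/tmp are the mapped A64 registers, and the A64_MRS_TPIDR_EL*
> >> encoders are introduced by this patch):
> >>
> >>   if (insn_is_mov_percpu_addr(insn)) {
> >>           if (dst != src)
> >>                   emit(A64_MOV(1, dst, src), ctx);
> >>           if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
> >>                   /* VHE: the per-CPU offset lives in tpidr_el2 */
> >>                   emit(A64_MRS_TPIDR_EL2(tmp), ctx);
> >>           else
> >>                   emit(A64_MRS_TPIDR_EL1(tmp), ctx);
> >>           emit(A64_ADD(1, dst, dst, tmp), ctx);
> >>           break;
> >>   }
> >>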
> >> To measure the performance improvement provided by this change, the
> >> benchmark in [1] was used:
> >>
> >> Before:
> >> glob-arr-inc   :   23.597 ± 0.012M/s
> >> arr-inc        :   23.173 ± 0.019M/s
> >> hash-inc       :   12.186 ± 0.028M/s
> >>
> >> After:
> >> glob-arr-inc   :   23.819 ± 0.034M/s
> >> arr-inc        :   23.285 ± 0.017M/s
> >
> > I still expected a bigger improvement (glob-arr-inc's results
> > improved more than arr-inc's, which is completely different from
> > x86-64), but it's still a good thing to support this for arm64, of
> > course.
> >
> > ack for generic parts I can understand:
> >
> > Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> >
>
> I will have to do more research to find out why we don't see a
> bigger improvement.
>
> But this is what is happening here:
>
> This was the complete picture before inlining:
>
> int cpu = bpf_get_smp_processor_id();
> mov     x10, #0xffffffffffffd4a8
> movk    x10, #0x802c, lsl #16
> movk    x10, #0x8000, lsl #32
> blr     x10 ---------------------------------------> nop
>                                                      nop
>                                                      adrp    x0, 0xffff800082128000
>                                                      mrs     x1, tpidr_el1
>                                                      add     x0, x0, #0x8
>                                                      ldrsw   x0, [x0, x1]
>             <----------------------------------------ret
> add     x7, x0, #0x0
>
>
> Now we have:
>
> int cpu = bpf_get_smp_processor_id();
> mov     x7, #0xffff8000ffffffff
> movk    x7, #0x8212, lsl #16
> movk    x7, #0x8008
> mrs     x10, tpidr_el1
> add     x7, x7, x10
> ldr     w7, [x7]
>
>
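> The verifier rewrite that produces this sequence is roughly the
> following (a sketch based on patch 2/2; cpu_number is the arm64
> per-CPU variable holding the CPU id):
>
>   struct bpf_insn cpu_number_addr[2] = {
>           BPF_LD_IMM64(BPF_REG_0, (u64)&cpu_number) };
>
>   /* r0 = &cpu_number   -> the mov/movk/movk sequence above */
>   insn_buf[0] = cpu_number_addr[0];
>   insn_buf[1] = cpu_number_addr[1];
>   /* r0 += per-CPU base -> mrs x10, tpidr_el1; add x7, x7, x10 */
>   insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>   /* r0 = *(u32 *)r0    -> ldr w7, [x7] */
>   insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>   cnt = 4;
>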
> So we have removed multiple instructions, including a branch and a
> return, yet I was expecting to see more improvement. This benchmark
> was run in a KVM-based virtual machine; maybe if I run it on
> bare metal I would see a bigger difference?

I see, yeah, I think it might change significantly. I remember, back
when I was benchmarking the BPF ringbuf, getting very different
results inside QEMU vs on bare metal, and not just in absolute
numbers. QEMU/KVM seems to change a lot of things when it comes to
contention, atomic instructions, etc. Anyway, for benchmarking,
always try to use bare metal.

>
> Thanks,
> Puranjay




