Re: [PATCH v3 bpf-next 2/2] bpf: inline bpf_get_branch_snapshot() helper

On 4/4/24 11:14 AM, Alexei Starovoitov wrote:
On Wed, Apr 3, 2024 at 5:27 PM Andrii Nakryiko <andrii@xxxxxxxxxx> wrote:
Inline bpf_get_branch_snapshot() helper using architecture-agnostic
inline BPF code which calls directly into underlying callback of
perf_snapshot_branch_stack static call. This callback is set early
during kernel initialization and is never updated or reset, so it's ok
to fetch actual implementation using static_call_query() and call
directly into it.

This change eliminates a full function call and saves one LBR entry
in PERF_SAMPLE_BRANCH_ANY LBR mode.

Acked-by: John Fastabend <john.fastabend@xxxxxxxxx>
Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
---
  kernel/bpf/verifier.c | 55 +++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 55 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 17c06f1505e4..2cb5db317a5e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -20181,6 +20181,61 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
                         goto next_insn;
                 }

+               /* Implement bpf_get_branch_snapshot inline. */
+               if (prog->jit_requested && BITS_PER_LONG == 64 &&
+                   insn->imm == BPF_FUNC_get_branch_snapshot) {
+                       /* We are dealing with the following func protos:
+                        * u64 bpf_get_branch_snapshot(void *buf, u32 size, u64 flags);
+                        * int perf_snapshot_branch_stack(struct perf_branch_entry *entries, u32 cnt);
+                        */
+                       const u32 br_entry_size = sizeof(struct perf_branch_entry);
+
+                       /* struct perf_branch_entry is part of UAPI and is
+                        * used as an array element, so extremely unlikely to
+                        * ever grow or shrink
+                        */
+                       BUILD_BUG_ON(br_entry_size != 24);
+
+                       /* if (unlikely(flags)) return -EINVAL */
+                       insn_buf[0] = BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 0, 7);
+
+                       /* Transform size (bytes) into number of entries (cnt = size / 24).
+                        * But to avoid expensive division instruction, we implement
+                        * divide-by-3 through multiplication, followed by further
+                        * division by 8 through 3-bit right shift.
+                        * Refer to book "Hacker's Delight, 2nd ed." by Henry S. Warren, Jr.,
+                        * p. 227, chapter "Unsigned Division by 3" for details and proofs.
+                        *
+                        * N / 3 <=> M * N / 2^33, where M = (2^33 + 1) / 3 = 0xaaaaaaab.
+                        */
+                       insn_buf[1] = BPF_MOV32_IMM(BPF_REG_0, 0xaaaaaaab);
+                       insn_buf[2] = BPF_ALU64_REG(BPF_MUL, BPF_REG_2, BPF_REG_0);
+                       insn_buf[3] = BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 36);
+
+                       /* call perf_snapshot_branch_stack implementation */
+                       insn_buf[4] = BPF_EMIT_CALL(static_call_query(perf_snapshot_branch_stack));
How will this work on non-x86 ?
I tried to grep the code and looks like only x86 does:
static_call_update(perf_snapshot_branch_stack,...)

so on other arch-s static_call_query() will return zero/einval?
And above will crash?

Patch 1 will give the answer. In events/core.c, we have the following:

  DEFINE_STATIC_CALL_RET0(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t);

  #define DEFINE_STATIC_CALL_RET0(name, _func)                          \
          DECLARE_STATIC_CALL(name, _func);                             \
          struct static_call_key STATIC_CALL_KEY(name) = {              \
                  .func = __static_call_return0,                        \
                  .type = 1,                                            \
          };                                                            \
          ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)

So the default value for perf_snapshot_branch_stack is
__static_call_return0.

In static_call.c,

  long __static_call_return0(void)
  {
          return 0;
  }
  EXPORT_SYMBOL_GPL(__static_call_return0);

So we should be fine.




