Re: [RFC PATCH bpf-next 5/6] bpf: Make BPF JIT support installation of bpf runtime hooks

On 2025/2/14 00:26, Juntong Deng wrote:
This patch makes the BPF JIT support installation of BPF runtime hooks.

The principle of BPF runtime hooks is simple: replace the memory
address of the kfunc in the CALL instruction with the memory address
of the hook function, and insert the memory address of the kfunc as
the 6th argument.

select_bpf_runtime_hook is used to select the runtime hook to be
installed, based on the kfunc. If it is an acquiring kfunc, install
bpf_runtime_acquire_hook; if it is a releasing kfunc, install
bpf_runtime_release_hook. Maybe in the future we can use this
to install watchdog hooks.

In the hook function, we can read the arguments passed to the original
kfunc. Normally, the hook function calls the original kfunc with the
same arguments and returns the return value produced by the original
kfunc.

After BPF JIT compilation, the calling convention of the bpf program
is the same as the native calling convention of the target
architecture (whichever architecture that is), so this approach will
always work.

Since this is only for demonstration purposes, only the x86_64
architecture is supported.

This approach is easily portable to other architectures; the only
things we need to do are replace the call address and insert an
argument.

Signed-off-by: Juntong Deng <juntong.deng@xxxxxxxxxxx>
---
  arch/x86/net/bpf_jit_comp.c |  8 ++++++++
  include/linux/btf.h         |  1 +
  kernel/bpf/btf.c            | 39 +++++++++++++++++++++++++++++++++++++
  3 files changed, 48 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a43fc5af973d..da579e835731 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2184,6 +2184,7 @@ st:			if (is_imm8(insn->off))
  			/* call */
  		case BPF_JMP | BPF_CALL: {
  			u8 *ip = image + addrs[i - 1];
+			void *runtime_hook;
  			func = (u8 *) __bpf_call_base + imm32;
  			if (src_reg == BPF_PSEUDO_CALL && tail_call_reachable) {
@@ -2197,6 +2198,13 @@ st:			if (is_imm8(insn->off))
  				ip += 2;
  			}
  			ip += x86_call_depth_emit_accounting(&prog, func, ip);
+			runtime_hook = select_bpf_runtime_hook(func);
+			if (runtime_hook) {
+				emit_mov_imm64(&prog, X86_REG_R9, (long)func >> 32,
+					       (u32) (long)func);
+				ip += 6;
+				func = (u8 *)runtime_hook;
+			}
  			if (emit_call(&prog, func, ip))
  				return -EINVAL;
  			if (priv_frame_ptr)
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 39f12d101809..46681181e2bc 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -571,6 +571,7 @@ void *bpf_runtime_acquire_hook(void *arg1, void *arg2, void *arg3,
  			       void *arg4, void *arg5, void *arg6);
  void bpf_runtime_release_hook(void *arg1, void *arg2, void *arg3,
  			      void *arg4, void *arg5, void *arg6);
+void *select_bpf_runtime_hook(void *kfunc);
  const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
  void btf_set_base_btf(struct btf *btf, const struct btf *base_btf);
  int btf_relocate(struct btf *btf, const struct btf *base_btf, __u32 **map_ids);
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 93ca804d52e3..f99b9f746674 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -9640,3 +9640,42 @@ void bpf_runtime_release_hook(void *arg1, void *arg2, void *arg3,
  	print_bpf_active_refs();
  }
+
+void *select_bpf_runtime_hook(void *kfunc)
+{
+	struct btf_struct_kfunc *struct_kfunc, dummy_key;
+	struct btf_struct_kfunc_tab *tab;
+	struct btf *btf;
+
+	btf = bpf_get_btf_vmlinux();
+	dummy_key.kfunc_addr = (unsigned long)kfunc;
+
+	tab = btf->acquire_kfunc_tab;
+	if (tab) {
+		struct_kfunc = bsearch(&dummy_key, tab->set, tab->cnt,
+				       sizeof(struct btf_struct_kfunc),
+				       btf_kfunc_addr_cmp_func);
+		if (struct_kfunc)
+			return bpf_runtime_acquire_hook;
+	}
+
+	tab = btf->release_kfunc_tab;
+	if (tab) {
+		struct_kfunc = bsearch(&dummy_key, tab->set, tab->cnt,
+				       sizeof(struct btf_struct_kfunc),
+				       btf_kfunc_addr_cmp_func);
+		if (struct_kfunc)
+			return bpf_runtime_release_hook;
+	}
+
+	/*
+	 * For watchdog we may need
+	 *
+	 * tab = btf->may_run_repeatedly_long_time_kfunc_tab
+	 * struct_kfunc = bsearch(&dummy_key, tab->set, tab->cnt, sizeof(struct btf_struct_kfunc),
+	 *		       btf_kfunc_addr_cmp_func);
+	 * if (struct_kfunc)
+	 *	return bpf_runtime_watchdog_hook;
+	 */
+	return NULL;
+}

This weekend I realized that BPF runtime hooks can have more application scenarios; for example, they can help us debug/diagnose BPF programs.

A proof-of-concept implementation [0].

[0]: https://lore.kernel.org/bpf/AM6PR03MB50804A5BF211E94A5DF8F66699FB2@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/T/#u




