On 12/20/2022 10:13 AM, Pu Lehui wrote:
From: Pu Lehui <pulehui@xxxxxxxxxx>
The current bpf trampoline attaches to kernel functions via the ftrace
direct call API, while text_poke is used for bpf2bpf attach and tail
call optimization. For architectures that do not support ftrace direct
calls, text_poke is still able to attach a bpf trampoline to kernel
functions. Let's relax the check by falling back to text_poke when the
architecture does not support ftrace direct calls.
Signed-off-by: Pu Lehui <pulehui@xxxxxxxxxx>
---
kernel/bpf/trampoline.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index d6395215b849..386197a7952c 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -228,15 +228,11 @@ static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_ad
static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
{
void *ip = tr->func.addr;
- unsigned long faddr;
int ret;
- faddr = ftrace_location((unsigned long)ip);
- if (faddr) {
- if (!tr->fops)
- return -ENOTSUPP;
+ if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) &&
+ !!ftrace_location((unsigned long)ip))
tr->func.ftrace_managed = true;
- }
After this patch, a kernel function with a non-zero ftrace_location()
will be patched directly by bpf when
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS is disabled, which means a
kernel function may be patched by both bpf and ftrace in a mutually
unaware way. This will cause ftrace and bpf_arch_text_poke() to fail in
a somewhat random way when the function being patched has already been
patched by the other (see the sketch after the quoted context below).
if (bpf_trampoline_module_get(tr))
return -ENOENT;