On Thu, Jul 01, 2021 at 08:58:54AM +0900, Masami Hiramatsu wrote:

SNIP

> > >  		return &bpf_override_return_proto;
> > >  #endif
> > > +	case BPF_FUNC_get_func_ip:
> > > +		return &bpf_get_func_ip_proto_kprobe;
> > >  	default:
> > >  		return bpf_tracing_func_proto(func_id, prog);
> > >  	}
> > > diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> > > index ea6178cb5e33..b07d5888db14 100644
> > > --- a/kernel/trace/trace_kprobe.c
> > > +++ b/kernel/trace/trace_kprobe.c
> > > @@ -1570,6 +1570,18 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
> > >  }
> > >
> > >  #ifdef CONFIG_PERF_EVENTS
> > > +/* Used by bpf get_func_ip helper */
> > > +DEFINE_PER_CPU(u64, current_kprobe_addr) = 0;
> >
> > Didn't check other architectures. But this should work
> > for x86 where if nested kprobe happens, the second
> > kprobe will not call kprobe handlers.
>
> No problem, other architectures also do not call nested kprobe handlers.
> However, you don't need this because you can use kprobe_running()
> in kprobe context.
>
> 	kp = kprobe_running();
> 	if (kp)
> 		return kp->addr;

great, that's easier

> BTW, I'm not sure why you don't use instruction_pointer(regs)?

I tried that, but it returns the function address + 1, and I thought
that could be different on each arch and we'd need arch-specific
code to deal with that

thanks,
jirka
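
For reference, a minimal sketch of what Masami's suggestion could look like wired into the helper (this is an illustration, not the posted patch: the helper name and proto follow the diff context above, but the body is an assumption based on the kprobe_running() snippet):

```c
/* Sketch only: kprobe flavor of the bpf_get_func_ip() helper using
 * kprobe_running() instead of the per-CPU current_kprobe_addr variable.
 * Kernel-side code; assumes kprobe context, where nested kprobes do not
 * invoke handlers, so the per-CPU current kprobe is stable here.
 */
#include <linux/kprobes.h>
#include <linux/bpf.h>

BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
{
	struct kprobe *kp = kprobe_running();

	/* kp->addr is the probed instruction address, i.e. the function
	 * entry when the kprobe sits at function entry -- unlike
	 * instruction_pointer(regs), which may be offset past the
	 * breakpoint on some architectures.
	 */
	return kp ? (u64)(unsigned long)kp->addr : 0;
}

static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
	.func		= bpf_get_func_ip_kprobe,
	.gpl_only	= true,
	.ret_type	= RET_INTEGER,
	.arg1_type	= ARG_PTR_TO_CTX,
};
```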