Re: [RFC bpf-next] bpf: Use prog->active instead of bpf_prog_active for kprobe_multi

On 5/25/22 4:40 AM, Jiri Olsa wrote:
hi,
Alexei suggested to use prog->active instead global bpf_prog_active
for programs attached with kprobe multi [1].

Both prog->active and bpf_prog_active try to prevent program
recursion. bpf_prog_active provides stronger protection, as it
prevents recursion across different programs, while prog->active
prevents recursion only within the same program.

Currently trampoline-based programs use the prog->active mechanism,
while kprobe, tracepoint and perf programs use bpf_prog_active.


AFAICS this will bypass bpf_disable_instrumentation, which seems to be
ok for some places like hash map update, but I'm not sure about other
places, hence this RFC post.

I'm not sure how kprobes differ from trampolines in this regard,
because trampolines use prog->active and it's not a problem there.

The following is just my understanding.
In most cases, prog->active should be okay. The only tricky
case might be with shared maps: one prog updates/deletes a map
element and, while holding the lock inside update/delete, another
trampoline program is triggered and tries to update/delete the same
map (bucket). But this is a known issue and not unique to
kprobe_multi.


thoughts?

thanks,
jirka


[1] https://lore.kernel.org/bpf/20220316185333.ytyh5irdftjcklk6@xxxxxxxxxxxxxxxxxxxxxxxxxxxx/
---
  kernel/trace/bpf_trace.c | 31 +++++++++++++++++++------------
  1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 10b157a6d73e..7aec39ae0a1c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2385,8 +2385,8 @@ static u64 bpf_kprobe_multi_entry_ip(struct bpf_run_ctx *ctx)
  }
static int
-kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
-			   unsigned long entry_ip, struct pt_regs *regs)
+__kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
+			     unsigned long entry_ip, struct pt_regs *regs)
  {
  	struct bpf_kprobe_multi_run_ctx run_ctx = {
  		.link = link,
@@ -2395,21 +2395,28 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
  	struct bpf_run_ctx *old_run_ctx;
  	int err;
- if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
-		err = 0;
-		goto out;
-	}
-
-	migrate_disable();
-	rcu_read_lock();
  	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
  	err = bpf_prog_run(link->link.prog, regs);
  	bpf_reset_run_ctx(old_run_ctx);
+	return err;
+}
+
+static int
+kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
+			   unsigned long entry_ip, struct pt_regs *regs)
+{
+	struct bpf_prog *prog = link->link.prog;
+	int err = 0;
+
+	migrate_disable();
+	rcu_read_lock();
+
+	if (likely(__this_cpu_inc_return(*(prog->active)) == 1))
+		err = __kprobe_multi_link_prog_run(link, entry_ip, regs);
+
+	__this_cpu_dec(*(prog->active));
  	rcu_read_unlock();
  	migrate_enable();
-
- out:
-	__this_cpu_dec(bpf_prog_active);
  	return err;
  }


