Re: [PATCH bpf-next 1/3] bpf: Add support for bpf_get_func_ip helper for uprobe program

On 8/1/23 12:44 PM, Yonghong Song wrote:
On 8/1/23 4:53 AM, Yafang Shao wrote:
On Tue, Aug 1, 2023 at 3:30 PM Jiri Olsa <jolsa@xxxxxxxxxx> wrote:

Adding support for the bpf_get_func_ip helper for uprobe programs to return
the probed address for both uprobe and return uprobe.

We discussed this in [1] and agreed that uprobe can have a special use
of the bpf_get_func_ip helper that differs from kprobe.

The kprobe bpf_get_func_ip returns:
   - the address of the function if the probe is attached on function entry,
     for both kprobe and return kprobe
   - 0 if the probe is not attached on function entry

The uprobe bpf_get_func_ip returns:
   - the address of the probe for both uprobe and return uprobe

The reason for this semantic change is that the kernel can't really tell
whether the probed user space address is a function entry.
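
For illustration, with this change a uprobe program can read the probed
address directly; a minimal sketch (not part of this series; the section
and program names are made up):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("uprobe")
int BPF_KPROBE(uprobe_handler)
{
	/* with this patch: the probed user space address, for both
	 * uprobe and return uprobe, instead of 0
	 */
	__u64 addr = bpf_get_func_ip(ctx);

	bpf_printk("uprobe hit at 0x%llx", addr);
	return 0;
}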

The uprobe program is actually a kprobe type program attached as a uprobe.
One of the consequences of this design is that uprobes do not have their
own set of helpers, but share them with kprobes.
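
On the libbpf side this shows up as attaching a kprobe-style program
through the uprobe attach API, e.g. (illustrative snippet; the skeleton,
binary path and function offset are made up):

	struct bpf_link *link;

	/* attach the kprobe type program above as a uprobe: not a
	 * return probe, any pid, given binary at given offset */
	link = bpf_program__attach_uprobe(skel->progs.uprobe_handler,
					  false /* retprobe */, -1 /* pid */,
					  "/usr/bin/foo", 0x1234 /* offset */);
	if (!link)
		return -errno; /* libbpf sets errno on failure */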

As we need different functionality for the bpf_get_func_ip helper for
uprobe, I'm adding a bool value to struct bpf_trace_run_ctx, so the helper
can detect that it's executed in uprobe context and call uprobe specific
code.
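
On the helper side the change then boils down to something like this
(rough sketch only, the exact change to kernel/trace/bpf_trace.c is
elided from the diff below; reading the probed address from
uprobe_dispatch_data mirrors what trace_uprobe does):

BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
{
	struct bpf_trace_run_ctx *run_ctx __maybe_unused;
	struct kprobe *kp;

#ifdef CONFIG_UPROBES
	/* detect uprobe context via the new flag and return the
	 * probed address stashed by the uprobe dispatcher */
	run_ctx = container_of(current->bpf_ctx,
			       struct bpf_trace_run_ctx, run_ctx);
	if (run_ctx->is_uprobe)
		return ((struct uprobe_dispatch_data *)
			current->utask->vaddr)->bp_addr;
#endif

	/* existing kprobe path */
	kp = kprobe_running();
	if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
		return 0;

	return get_entry_ip((uintptr_t)kp->addr);
}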

The is_uprobe bool is set to true in bpf_prog_run_array_sleepable, which
is currently used only for executing bpf programs in uprobe context.

That is error-prone.  If we don't intend to rename
bpf_prog_run_array_sleepable() to bpf_prog_run_array_uprobe(), I think
we'd better add a new 'bool is_uprobe' parameter to it instead.

Agree that renaming bpf_prog_run_array_sleepable() to
bpf_prog_run_array_uprobe() is probably better. This way,
`run_ctx.is_uprobe = true` becomes self-explanatory.

In the unlikely case that another sleepable run prog array is needed in
the future, it can get its own bpf_prog_run_array_<...> wrapper and the
underlying bpf_prog_run_array_sleepable() can be factored out.
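
I.e. something like below; the body would stay what
bpf_prog_run_array_sleepable() is today (elided here), only the name
changes and the flag becomes obvious:

	static __always_inline u32
	bpf_prog_run_array_uprobe(const struct bpf_prog_array __rcu *array_rcu,
				  const void *ctx, bpf_prog_run_fn run_prog)
	{
		struct bpf_trace_run_ctx run_ctx;
		...
		/* uprobe is currently the only sleepable user */
		run_ctx.is_uprobe = true;
		...
	}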

Or, if we want to avoid unnecessary code churn, at least add a comment
in bpf_prog_run_array_sleepable() to explain why it is safe to do
`run_ctx.is_uprobe = true;`.
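
E.g. something along these lines; the wording is just an illustration:

	/* bpf_prog_run_array_sleepable() is currently used only for
	 * executing bpf programs in uprobe context, hence it is safe
	 * to unconditionally set run_ctx.is_uprobe here.
	 */
	run_ctx.is_uprobe = true;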

Suggested-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
[1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@xxxxxxxxxxxxxx/
Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
---
  include/linux/bpf.h            |  5 +++++
  include/uapi/linux/bpf.h       |  7 ++++++-
  kernel/trace/bpf_trace.c       | 21 ++++++++++++++++++++-
  kernel/trace/trace_probe.h     |  5 +++++
  kernel/trace/trace_uprobe.c    |  5 -----
  tools/include/uapi/linux/bpf.h |  7 ++++++-
  6 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index ceaa8c23287f..8ea071383ef1 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1819,6 +1819,7 @@ struct bpf_cg_run_ctx {
  struct bpf_trace_run_ctx {
         struct bpf_run_ctx run_ctx;
         u64 bpf_cookie;
+       bool is_uprobe;
  };

  struct bpf_tramp_run_ctx {
@@ -1867,6 +1868,8 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
         if (unlikely(!array))
                 return ret;

+       run_ctx.is_uprobe = false;
+
         migrate_disable();
         old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
         item = &array->items[0];
@@ -1906,6 +1909,8 @@ bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
         rcu_read_lock_trace();
         migrate_disable();

+       run_ctx.is_uprobe = true;
+
         array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
         if (unlikely(!array))
                 goto out;
[...]
