Re: [PATCH bpf-next v2 2/4] bpf, x86: Create bpf_trace_run_ctx on the caller thread's stack

On Tue, Mar 15, 2022 at 5:44 PM Kui-Feng Lee <kuifeng@xxxxxx> wrote:
>
> BPF trampolines will create a bpf_trace_run_ctx on their stacks, and
> set/reset the current bpf_run_ctx whenever calling/returning from a
> bpf_prog.
>
> Signed-off-by: Kui-Feng Lee <kuifeng@xxxxxx>
> ---
>  arch/x86/net/bpf_jit_comp.c | 32 ++++++++++++++++++++++++++++++++
>  include/linux/bpf.h         | 12 ++++++++----
>  kernel/bpf/syscall.c        |  4 ++--
>  kernel/bpf/trampoline.c     | 21 +++++++++++++++++----
>  4 files changed, 59 insertions(+), 10 deletions(-)
>

[...]

> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 54c695d49ec9..0b050aa2f159 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -580,9 +580,12 @@ static void notrace inc_misses_counter(struct bpf_prog *prog)
>   * [2..MAX_U64] - execute bpf prog and record execution time.
>   *     This is start time.
>   */
> -u64 notrace __bpf_prog_enter(struct bpf_prog *prog)
> +u64 notrace __bpf_prog_enter(struct bpf_prog *prog, struct bpf_trace_run_ctx *run_ctx)
>         __acquires(RCU)
>  {
> +       if (run_ctx)
> +               run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
> +

In all current cases we call bpf_set_run_ctx() after migrate_disable()
and rcu_read_lock(); let's keep this consistent (even if I don't
remember whether that order actually matters or not).
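
I.e., something along these lines (untested, just to illustrate the
ordering I have in mind; the rest of the body is unchanged from your
patch):

u64 notrace __bpf_prog_enter(struct bpf_prog *prog, struct bpf_trace_run_ctx *run_ctx)
	__acquires(RCU)
{
	rcu_read_lock();
	migrate_disable();

	if (run_ctx)
		run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);

	if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
		inc_misses_counter(prog);
		return 0;
	}
	return bpf_prog_start_time();
}

Same for the sleepable variant.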

>         rcu_read_lock();
>         migrate_disable();
>         if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
> @@ -614,17 +617,23 @@ static void notrace update_prog_stats(struct bpf_prog *prog,
>         }
>  }
>
> -void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
> +void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start, struct bpf_trace_run_ctx *run_ctx)
>         __releases(RCU)
>  {
> +       if (run_ctx)
> +               bpf_reset_run_ctx(run_ctx->saved_run_ctx);
> +
>         update_prog_stats(prog, start);
>         __this_cpu_dec(*(prog->active));
>         migrate_enable();
>         rcu_read_unlock();
>  }
>
> -u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog)
> +u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog, struct bpf_trace_run_ctx *run_ctx)
>  {
> +       if (run_ctx)
> +               run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
> +
>         rcu_read_lock_trace();
>         migrate_disable();
>         might_fault();
> @@ -635,8 +644,12 @@ u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog)
>         return bpf_prog_start_time();
>  }
>
> -void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start)
> +void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start,
> +                                      struct bpf_trace_run_ctx *run_ctx)

Now that we have the entire run_ctx, can we move `start` into run_ctx
and simplify the __bpf_prog_enter/exit calls a bit? Or would the extra
indirection hurt performance more than the simpler enter/exit calling
convention would compensate for?
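
E.g., roughly something like this (the `start` field name and its
placement are hypothetical, untested):

	struct bpf_trace_run_ctx {
		struct bpf_run_ctx run_ctx;
		struct bpf_run_ctx *saved_run_ctx;
		u64 start;	/* hypothetical: filled by __bpf_prog_enter*() */
	};

	u64 notrace __bpf_prog_enter(struct bpf_prog *prog, struct bpf_trace_run_ctx *run_ctx);
	void notrace __bpf_prog_exit(struct bpf_prog *prog, struct bpf_trace_run_ctx *run_ctx);

__bpf_prog_enter() would store bpf_prog_start_time() into
run_ctx->start (it can still return it for the "skip the prog" check
in the trampoline), and __bpf_prog_exit() would pass run_ctx->start to
update_prog_stats() instead of taking a separate `start` argument.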

>  {
> +       if (run_ctx)
> +               bpf_reset_run_ctx(run_ctx->saved_run_ctx);
> +
>         update_prog_stats(prog, start);
>         __this_cpu_dec(*(prog->active));
>         migrate_enable();
> --
> 2.30.2
>


