Re: [PATCH bpf-next] bpf: avoid get_kernel_nofault() to fetch kprobe entry IP

On Wed, Mar 20, 2024 at 1:34 AM Jiri Olsa <olsajiri@xxxxxxxxx> wrote:
>
> On Wed, Mar 20, 2024 at 12:47:42PM +0900, Masami Hiramatsu wrote:
> > On Tue, 19 Mar 2024 14:20:13 -0700
> > Andrii Nakryiko <andrii@xxxxxxxxxx> wrote:
> >
> > > get_kernel_nofault() (or, rather, the underlying copy_from_kernel_nofault())
> > > is not free, and it does pop up in performance profiles when
> > > kprobes are heavily utilized with CONFIG_X86_KERNEL_IBT=y.
> > >
> > > Let's avoid using it if we know that fentry_ip - 4 can't cross a page
> > > boundary. We do that by masking the lowest 12 bits and checking whether
> > > they are >= 4, in which case we can do a direct memory read.
> > >
> > > Another benefit (and actually what prompted a closer look at this part
> > > of the code) is that the LBR record is now (typically) not wasted on the
> > > copy_from_kernel_nofault() call and code, which helps tools like
> > > retsnoop that grab LBR records from inside BPF code in kretprobes.
>
> I think this is nice improvement
>
> Acked-by: Jiri Olsa <jolsa@xxxxxxxxxx>
>

Masami, are you ok if we land this rather straightforward fix in the
bpf-next tree for now, and then you or someone more familiar with
ftrace/kprobe internals can generalize it properly later?
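
For anyone skimming the patch below: the fast path relies on the fact
that reading the 4 bytes at fentry_ip - ENDBR_INSN_SIZE can only fault
if fentry_ip sits within the first 4 bytes of a page. A minimal
standalone illustration of that check (userspace-style sketch, not the
kernel code itself):

/*
 * Illustration only, not kernel code. On 4K pages, the low 12 bits of
 * an address are its offset within the page. If that offset is at
 * least ENDBR_INSN_SIZE (4), then fentry_ip - 4 lies in the same page
 * as fentry_ip, so a plain 4-byte load cannot fault and the
 * copy_from_kernel_nofault() slow path can be skipped.
 */
#include <stdbool.h>

#define PAGE_SIZE	4096UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define ENDBR_INSN_SIZE	4

static bool can_read_endbr_directly(unsigned long fentry_ip)
{
	/* page offset >= 4 means the 4 bytes before fentry_ip stay in this page */
	return (fentry_ip & ~PAGE_MASK) >= ENDBR_INSN_SIZE;
}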

> >
> > Hmm, it may be better to have this function on the kprobe side and
> > store a flag indicating that such an architecture-dependent offset
> > was added. That is more natural.
>
> I like the idea of a new flag saying the address was adjusted for ENDBR
>

Instead of a flag, can the kprobe low-level infrastructure just provide
the "effective fentry ip" directly, so that the BPF side of things
doesn't have to care?
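
To sketch what I mean (purely illustrative, with made-up names, and not
a real patch), building on the arch_adjust_kprobe_addr() path Jiri
mentions below: the arch code already decodes ENDBR once at attach
time, so it could record the adjustment and hand the handler an entry
IP that needs no further decoding.

/*
 * Hypothetical standalone sketch: the struct and function names here
 * do not exist in the kernel. The kprobe/fprobe layer would record, at
 * attach time, how many bytes it skipped at the function entry (e.g.
 * ENDBR_INSN_SIZE on x86 with IBT), and the BPF side would only
 * subtract that known offset at probe-fire time -- no
 * get_kernel_nofault(), no is_endbr() check.
 */
#include <stdint.h>

struct probe_entry_info {
	uint8_t entry_skip;	/* made-up field: bytes skipped at function entry, e.g. 4 for ENDBR */
};

static unsigned long effective_entry_ip(const struct probe_entry_info *info,
					unsigned long fentry_ip)
{
	return fentry_ip - info->entry_skip;
}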

> kprobes adjust the address in arch_adjust_kprobe_addr(); it could
> easily be added in there, and then we'd adjust the address in
> get_entry_ip() accordingly
>
> jirka
>
> >
> > Thanks!
> >
> > >
> > > Cc: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
> > > Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> > > Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> > > ---
> > >  kernel/trace/bpf_trace.c | 12 +++++++++---
> > >  1 file changed, 9 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > index 0a5c4efc73c3..f81adabda38c 100644
> > > --- a/kernel/trace/bpf_trace.c
> > > +++ b/kernel/trace/bpf_trace.c
> > > @@ -1053,9 +1053,15 @@ static unsigned long get_entry_ip(unsigned long fentry_ip)
> > >  {
> > >     u32 instr;
> > >
> > > -   /* Being extra safe in here in case entry ip is on the page-edge. */
> > > -   if (get_kernel_nofault(instr, (u32 *) fentry_ip - 1))
> > > -           return fentry_ip;
> > > +   /* We want to be extra safe in case entry ip is on the page edge,
> > > +    * but otherwise we need to avoid get_kernel_nofault()'s overhead.
> > > +    */
> > > +   if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
> > > +           if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
> > > +                   return fentry_ip;
> > > +   } else {
> > > +           instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
> > > +   }
> > >     if (is_endbr(instr))
> > >             fentry_ip -= ENDBR_INSN_SIZE;
> > >     return fentry_ip;
> > > --
> > > 2.43.0
> > >
> >
> >
> > --
> > Masami Hiramatsu (Google) <mhiramat@xxxxxxxxxx>
> >




