Re: [PATCH v3 5/8] tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS

On Thu, 17 Aug 2023 10:57:50 +0200
Florent Revest <revest@xxxxxxxxxxxx> wrote:

> On Sat, Aug 12, 2023 at 7:37 AM Masami Hiramatsu (Google)
> <mhiramat@xxxxxxxxxx> wrote:
> >
> > diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> > index d56304276318..6fb4ecf8767d 100644
> > --- a/kernel/trace/Kconfig
> > +++ b/kernel/trace/Kconfig
> > @@ -679,7 +679,6 @@ config FPROBE_EVENTS
> >         select TRACING
> >         select PROBE_EVENTS
> >         select DYNAMIC_EVENTS
> > -       depends on DYNAMIC_FTRACE_WITH_REGS
> 
> I believe that, in practice, fprobe events still rely on WITH_REGS:
> 
> > diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c
> > index f440c97e050f..94c01dc061ec 100644
> > --- a/kernel/trace/trace_fprobe.c
> > +++ b/kernel/trace/trace_fprobe.c
> > @@ -327,14 +328,15 @@ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip,
> >         struct pt_regs *regs = ftrace_get_regs(fregs);
> 
> Because here you require that the ftrace_regs passed to the entry
> handler contain full pt_regs.

Ah, that is for perf events. Yes, that is the problematic point.
Since perf's interfaces depend on pt_regs (especially for stacktraces),
I cannot remove this part. This is the next issue to be solved.
Maybe we can use a partial pt_regs for stack tracing, so we could swap the
order of the patches to introduce ftrace_partial_regs() before this one and
use it for perf events.
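
In other words, something like the following in the CONFIG_PERF_EVENTS
branch of the hunk below (just a sketch, not a tested patch: the on-stack
pt_regs buffer is only for illustration and a per-CPU buffer may be better,
and whether the partial regs are enough for perf's stack tracing still needs
to be checked):

#ifdef CONFIG_PERF_EVENTS
	if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE)) {
		/* Fill a partial pt_regs from ftrace_regs instead of
		 * requiring ftrace_get_regs() to return full regs. */
		struct pt_regs regs_buf;
		struct pt_regs *regs = ftrace_partial_regs(fregs, &regs_buf);

		ret = fentry_perf_func(tf, entry_ip, fregs, regs);
	}
#endif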

> 
> >         int ret = 0;
> >
> > +       if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE))
> > +               fentry_trace_func(tf, entry_ip, fregs);
> > +
> > +#ifdef CONFIG_PERF_EVENTS
> >         if (!regs)
> >                 return 0;
> >
> > -       if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE))
> > -               fentry_trace_func(tf, entry_ip, regs);
> > -#ifdef CONFIG_PERF_EVENTS
> >         if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE))
> > -               ret = fentry_perf_func(tf, entry_ip, regs);
> > +               ret = fentry_perf_func(tf, entry_ip, fregs, regs);
> >  #endif
> >         return ret;
> >  }
> > @@ -347,14 +349,15 @@ static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip,
> >         struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp);
> >         struct pt_regs *regs = ftrace_get_regs(fregs);
> 
> And same here with the return handler
> 
> I think fprobe events would need the same sort of refactoring as
> kprobe_multi bpf: using ftrace_partial_regs so they also work on
> !WITH_REGS builds.

Actually, kprobe_multi uses fprobe directly, so this is not related to
the bpf part.
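
For reference, the bpf side attaches roughly like this (simplified from
kernel/trace/bpf_trace.c; the struct layout and surrounding code are
abbreviated from memory):

	struct bpf_kprobe_multi_link {
		struct bpf_link link;
		struct fprobe fp;
		/* ... */
	};

	/* in bpf_kprobe_multi_link_attach(): */
	link->fp.entry_handler = kprobe_multi_link_handler;
	link->fp.exit_handler = kprobe_multi_link_exit_handler;

	err = register_fprobe_ips(&link->fp, addrs, cnt);

So any ftrace_partial_regs() conversion for that path lives in the bpf
handlers in bpf_trace.c, not in trace_fprobe.c.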

Thank you,


-- 
Masami Hiramatsu (Google) <mhiramat@xxxxxxxxxx>



