Re: [PATCH v5 2/9] tracing/probes: Add fprobe events for tracing function entry and exit.

On Thu, Apr 20, 2023 at 4:41 PM Masami Hiramatsu <mhiramat@xxxxxxxxxx> wrote:
>
> On Thu, 20 Apr 2023 11:49:32 -0700
> Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> wrote:
>
> > On Thu, Apr 20, 2023 at 08:25:50PM +0900, Masami Hiramatsu (Google) wrote:
> > > +static int fentry_perf_func(struct trace_fprobe *tf, unsigned long entry_ip,
> > > +                       struct pt_regs *regs)
> > > +{
> > > +   struct trace_event_call *call = trace_probe_event_call(&tf->tp);
> > > +   struct fentry_trace_entry_head *entry;
> > > +   struct hlist_head *head;
> > > +   int size, __size, dsize;
> > > +   int rctx;
> > > +
> > > +   if (bpf_prog_array_valid(call)) {
> > > +           unsigned long orig_ip = instruction_pointer(regs);
> > > +           int ret;
> > > +
> > > +           ret = trace_call_bpf(call, regs);
> >
> > Please do not call bpf from fprobe.
> > There is no use case for it.
>
> OK.
>
> >
> > > +
> > > +           /*
> > > +            * We need to check and see if we modified the pc of the
> > > +            * pt_regs, and if so return 1 so that we don't do the
> > > +            * single stepping.
> > > +            */
> > > +           if (orig_ip != instruction_pointer(regs))
> > > +                   return 1;
> > > +           if (!ret)
> > > +                   return 0;
> > > +   }
> > > +
> > > +   head = this_cpu_ptr(call->perf_events);
> > > +   if (hlist_empty(head))
> > > +           return 0;
> > > +
> > > +   dsize = __get_data_size(&tf->tp, regs);
> > > +   __size = sizeof(*entry) + tf->tp.size + dsize;
> > > +   size = ALIGN(__size + sizeof(u32), sizeof(u64));
> > > +   size -= sizeof(u32);
> > > +
> > > +   entry = perf_trace_buf_alloc(size, NULL, &rctx);
> > > +   if (!entry)
> > > +           return 0;
> > > +
> > > +   entry->ip = entry_ip;
> > > +   memset(&entry[1], 0, dsize);
> > > +   store_trace_args(&entry[1], &tf->tp, regs, sizeof(*entry), dsize);
> > > +   perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs,
> > > +                         head, NULL);
> > > +   return 0;
> > > +}
> > > +NOKPROBE_SYMBOL(fentry_perf_func);
> > > +
> > > +static void
> > > +fexit_perf_func(struct trace_fprobe *tf, unsigned long entry_ip,
> > > +           unsigned long ret_ip, struct pt_regs *regs)
> > > +{
> > > +   struct trace_event_call *call = trace_probe_event_call(&tf->tp);
> > > +   struct fexit_trace_entry_head *entry;
> > > +   struct hlist_head *head;
> > > +   int size, __size, dsize;
> > > +   int rctx;
> > > +
> > > +   if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
> > > +           return;
> >
> > Same here.
> > These two parts look like copy-paste from kprobes.
> > I suspect this code wasn't tested at all.
>
> OK, I missed testing that bpf part. I thought bpf programs could be attached
> to anything that looks like a trace event, couldn't they?

No. We're not applying bpf filtering to any random event
that gets introduced in a tracing subsystem.
fprobe falls into that category.
Every hook where bpf can be invoked has to be thought through.
That mental exercise didn't happen here.



