On Tue, Mar 29, 2022 at 4:11 PM Beau Belgrave <beaub@xxxxxxxxxxxxxxxxxxx> wrote:
>
> On Tue, Mar 29, 2022 at 03:31:31PM -0700, Alexei Starovoitov wrote:
> > On Tue, Mar 29, 2022 at 1:11 PM Beau Belgrave <beaub@xxxxxxxxxxxxxxxxxxx> wrote:
> > >
> > > On Tue, Mar 29, 2022 at 12:50:40PM -0700, Alexei Starovoitov wrote:
> > > > On Tue, Mar 29, 2022 at 11:19 AM Beau Belgrave
> > > > <beaub@xxxxxxxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > Send user_event data to attached eBPF programs for user_event based
> > > > > perf events.
> > > > >
> > > > > Add BPF_ITER flag to allow user_event data to have a zero copy path
> > > > > into eBPF programs if required.
> > > > >
> > > > > Update documentation to describe new flags and structures for eBPF
> > > > > integration.
> > > > >
> > > > > Signed-off-by: Beau Belgrave <beaub@xxxxxxxxxxxxxxxxxxx>
> > > >
> > > > The commit describes _what_ it does, but says nothing about _why_.
> > > > At present I see no use for the bpf and user_events connection.
> > > > The whole user_events feature looks redundant to me.
> > > > We have uprobes and usdt. It doesn't look to me like user_events
> > > > provides anything new that wasn't available earlier.
> > >
> > > A lot of the why, in general, for user_events is covered in the first
> > > change in the series.
> > > Link: https://lore.kernel.org/all/20220118204326.2169-1-beaub@xxxxxxxxxxxxxxxxxxx/
> > >
> > > The why was also covered at Linux Plumbers Conference 2021 within the
> > > tracing microconference.
> > >
> > > An example of why we want user_events:
> > > We have managed code that emits data out via Open Telemetry.
> > > Since it's managed, there isn't a fixed stub location to patch; it moves.
> > > We watch the Open Telemetry spans in an eBPF program; when a span takes
> > > too long, we collect stack data and perform other actions.
> > > With user_events and perf we can monitor the entire system from the root
> > > container without needing relay agents within each cgroup/namespace
> > > taking up resources.
> > > We do not need to enter each cgroup mnt namespace and determine the
> > > correct patch location or the right version of each binary for
> > > processes that use user_events.
> > >
> > > An example of why we want eBPF integration:
> > > We also have scenarios where we are live-decoding the data.
> > > Having user_data fed directly to eBPF lets us cast the incoming data to
> > > a struct and decode very quickly to determine if something is wrong.
> > > We can then put that data into maps to perform further aggregation as
> > > required.
> > > We have scenarios with "skid" problems, where we need to grab further
> > > data exactly when the process that had the problem was running.
> > > eBPF lets us do all of this in ways we cannot easily do otherwise.
> > >
> > > Another benefit of user_events is that the tracing is much faster than
> > > uprobes or other approaches using int 3 traps. This is critical for
> > > enabling it on production systems.
> >
> > None of it makes sense to me.
> > Sorry.
> > To take advantage of user_events, user space has to be modified and
> > writev syscalls inserted.
>
> Yes, both user_events and lttng require user space modifications to do
> tracing correctly. The syscall overheads are real, and the cost depends
> on the mitigations around spectre/meltdown.
>
> > This is not cheap and I cannot see a production system using this
> > interface.
>
> But are you fine with uprobe costs? uprobes appear to be much more costly
> than a syscall approach on the hardware I've run on.
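
[ Aside for readers outside the thread: the "writev syscalls inserted"
cost being debated above refers to the user_events write path. Below is a
minimal sketch of that path pieced together from the series'
documentation. The event name "otel_span", its single field, and the
error handling are invented for illustration; the struct user_reg layout,
the DIAG_IOCSREG number, and the tracefs path follow the patchset's uapi
header and docs as posted, and may differ in later revisions. ]

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/uio.h>
#include <unistd.h>
#include <linux/types.h>

/* Layout and ioctl per the series' uapi header; treat as illustrative,
 * the ABI was still under revision at this point in the thread. */
struct user_reg {
	__u32 size;         /* in: sizeof(struct user_reg) */
	__u64 name_args;    /* in: "name type field ..." string */
	__u32 status_index; /* out: byte to poll for "enabled" */
	__u32 write_index;  /* out: prepended to every write */
};

#define DIAG_IOCSREG _IOWR('*', 0, struct user_reg*)

int main(void)
{
	struct user_reg reg = { .size = sizeof(reg) };
	__u64 duration_ns = 1234;
	struct iovec io[2];
	int fd;

	fd = open("/sys/kernel/debug/tracing/user_events_data", O_RDWR);
	if (fd < 0)
		return 1;

	/* Register (or look up) the event by name and field layout. */
	reg.name_args = (__u64)(uintptr_t)"otel_span u64 duration_ns";
	if (ioctl(fd, DIAG_IOCSREG, &reg) < 0)
		return 1;

	/* Emit one event: iovec 0 carries the write index, the rest is
	 * the payload. This writev() is the per-event syscall whose cost
	 * Alexei objects to and Beau compares against uprobes' int 3 trap. */
	io[0].iov_base = &reg.write_index;
	io[0].iov_len  = sizeof(reg.write_index);
	io[1].iov_base = &duration_ns;
	io[1].iov_len  = sizeof(duration_ns);
	writev(fd, io, 2);

	close(fd);
	return 0;
}
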
>
> > All you did is a poor man's version of lttng, which doesn't rely
> > on such heavy instrumentation.
>
> Well, I am a frugal person. :)
>
> This work has solved some critical issues we've been having, and I would
> appreciate a review of the code if possible.

It's a NACK to connect bpf and user_events.
I would remove user_events from the kernel too.
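
[ For completeness: the consumer pattern Beau describes upthread, cast the
incoming payload to a struct, check a threshold, grab a stack before the
"skid" window closes, and aggregate into maps, would look roughly like the
BPF program below. This is a sketch only: struct span_event, the map
names, and the 10ms budget are invented, and the mechanism that would
deliver the payload pointer to the program is exactly the proposed ABI
being NACKed here, so the entry-point plumbing is deliberately elided. ]

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Mirrors the writer's writev() payload layout; made-up name/fields. */
struct span_event {
	__u64 span_id;
	__u64 duration_ns;
};

struct {
	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
	__uint(max_entries, 1024);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, 127 * sizeof(__u64));
} stacks SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 10240);
	__type(key, __u64);   /* span_id */
	__type(value, __u64); /* duration that blew the budget */
} slow_spans SEC(".maps");

/* Core of the "live decode" idea: no copy out to user space, just a
 * struct cast, a compare, and map updates. */
static __always_inline void handle_span(void *ctx, struct span_event *e)
{
	const __u64 budget_ns = 10 * 1000 * 1000; /* example: 10ms */

	if (e->duration_ns <= budget_ns)
		return;

	/* Over budget: capture the user stack while the offending task
	 * is still on-CPU (the "skid" problem above), then aggregate. */
	bpf_get_stackid(ctx, &stacks, BPF_F_USER_STACK);
	bpf_map_update_elem(&slow_spans, &e->span_id, &e->duration_ns,
			    BPF_ANY);
}

SEC("perf_event")
int watch_spans(void *ctx)
{
	/* With the patch under review, the user_event payload would be
	 * reachable from here and passed to handle_span(); that glue is
	 * the contested interface, so it is not shown. */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
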