On Wed, May 18, 2022 at 2:35 PM John Mazzie <john.p.mazzie@xxxxxxxxx> wrote:
>
> My group at Micron is using BPF, and we love the tracing capabilities
> it provides. We are mainly focused on the storage subsystem, and BPF
> has been really helpful in understanding how the storage subsystem
> interacts with our drives while running applications.
>
> In the process of developing a tool using BPF to trace the nvme
> driver, we ran into an issue with some missing events. I wanted to
> check whether this is a bug/limitation that I'm hitting or expected
> behavior with heavy tracing. We are trying to trace two tracepoints
> (nvme_setup_cmd and nvme_complete_rq) around 1M times a second.
> We noticed that if we trace just one of the two, we see all the
> expected events, but if we trace both at the same time, the
> nvme_complete_rq tracepoint misses events.

kprobe and tracepoint programs have per-CPU reentrancy protection. That
is, if some BPF kprobe/tracepoint program is running and something
happens on the same CPU that would trigger another BPF program (e.g.,
the BPF program calls a kernel function that has another BPF program
attached to it, or preemption happens and another BPF program is
supposed to run), then that nested BPF program invocation will be
skipped. This might be what happens in your case.

> I am using two different percpu_hash maps to count the events: one
> for setup and another for complete. My expectation was that tracing
> these events would affect performance somewhat, but not miss events.
> Ultimately the tool would be used to trace nvme latencies at the
> driver level by device and process.
>
> My tool was developed using libbpf v0.7, and I've tested on Rocky
> Linux 8.5 (kernel 4.18.0), Ubuntu 20.04 (kernel 5.4), and Fedora 36
> (kernel 5.17.6) with the same results.
>
> Thanks,
> John Mazzie
> Principal Storage Solutions Engineer
> Micron Technology, Inc.
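
If you want to confirm that this is what's eating your events: kernels
that are new enough report skipped invocations as recursion_misses in
struct bpf_prog_info (the field was added in v5.12, so your Fedora
5.17.6 kernel should have it, but the 4.18 and 5.4 kernels will not;
whether the kprobe/tracepoint path bumps it also depends on kernel
version). Here is a minimal sketch of reading it with libbpf; prog_fd
is assumed to come from your own skeleton (e.g., bpf_program__fd()):

    #include <stdio.h>
    #include <bpf/bpf.h>

    /* Print how many invocations of a loaded program were skipped
     * due to the per-CPU reentrancy protection described above.
     */
    static void print_recursion_misses(int prog_fd)
    {
            struct bpf_prog_info info = {};
            __u32 len = sizeof(info);

            if (bpf_obj_get_info_by_fd(prog_fd, &info, &len)) {
                    perror("bpf_obj_get_info_by_fd");
                    return;
            }
            /* run_cnt only accumulates while the
             * kernel.bpf_stats_enabled sysctl is set to 1;
             * recursion_misses reads as 0 on kernels that
             * predate the field or don't count misses for
             * this program type.
             */
            printf("prog '%s': run_cnt=%llu recursion_misses=%llu\n",
                   info.name, info.run_cnt, info.recursion_misses);
    }

bpftool prog show should report the same counter on those kernels,
which is a quick way to check without changing your tool.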