On Tue, Oct 22, 2024 at 5:09 PM Namhyung Kim <namhyung@xxxxxxxxxx> wrote:
>
> Like in the software events, the BPF overflow handler can drop samples
> by returning 0. Let's count the dropped samples here too.
>
> Acked-by: Kyle Huey <me@xxxxxxxxxxxx>
> Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
> Cc: Andrii Nakryiko <andrii@xxxxxxxxxx>
> Cc: Song Liu <song@xxxxxxxxxx>
> Cc: bpf@xxxxxxxxxxxxxxx
> Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>
> ---
>  kernel/events/core.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 5d24597180dec167..b41c17a0bc19f7c2 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -9831,8 +9831,10 @@ static int __perf_event_overflow(struct perf_event *event,
>         ret = __perf_event_account_interrupt(event, throttle);
>
>         if (event->prog && event->prog->type == BPF_PROG_TYPE_PERF_EVENT &&
> -           !bpf_overflow_handler(event, data, regs))
> +           !bpf_overflow_handler(event, data, regs)) {
> +               atomic64_inc(&event->dropped_samples);

I don't see the full patch set (please cc the relevant people and the
mailing list on each patch in the set), but do we really want to pay
the price of an atomic increment in what is the very typical situation
of a BPF program returning 0?

At least from a BPF perspective this is not "dropping a sample"; the
sample is simply processed in BPF without paying the overhead of the
perf subsystem continuing to process it afterwards. So the "dropped"
naming is misleading as well, IMO.

>                 return ret;
> +       }
>
>         /*
>          * XXX event_limit might not quite work as expected on inherited
> --
> 2.47.0.105.g07ac214952-goog
>
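
For reference, a minimal sketch of the pattern in question (illustrative
only, not from this patch set; the program and section names are made up):
a BPF perf_event program that fully handles the sample and returns 0, so
with this patch every single sample would take the atomic64_inc() path.

  #include <linux/bpf_perf_event.h>
  #include <bpf/bpf_helpers.h>

  SEC("perf_event")
  int on_sample(struct bpf_perf_event_data *ctx)
  {
          /* consume the sample entirely in BPF, e.g. aggregate into a map */

          /*
           * Returning 0 makes __perf_event_overflow() bail out early; with
           * this patch that early return would also increment
           * event->dropped_samples on every sample.
           */
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";

In this usage the return-0 case is the common, expected path rather than an
exceptional one, which is what makes the per-sample atomic increment (and
the "dropped" naming) questionable from the BPF side.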