On Thu, Jul 27, 2023 at 12:00:10PM -0700, John Fastabend wrote:
> Adam Sindelar wrote:
> > We already provide ring_buffer__epoll_fd to enable use of external
> > polling systems. However, the only API available to consume the ring
> > buffer is ring_buffer__consume, which always checks all rings. When
> > polling for many events, this can be wasteful.
> >
> > Signed-off-by: Adam Sindelar <adam@xxxxxxxxxxxx>
> > ---
> > v1->v2: Added entry to libbpf.map
> > v2->v3: Correctly set errno and handle overflow
> > v3->v4: Fixed an embarrassing typo from zealous autocomplete
> >
> >  tools/lib/bpf/libbpf.h   |  1 +
> >  tools/lib/bpf/libbpf.map |  1 +
> >  tools/lib/bpf/ringbuf.c  | 22 ++++++++++++++++++++++
> >  3 files changed, 24 insertions(+)
> >
> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> > index 55b97b2087540..20ccc65eb3f9d 100644
> > --- a/tools/lib/bpf/libbpf.h
> > +++ b/tools/lib/bpf/libbpf.h
> > @@ -1195,6 +1195,7 @@ LIBBPF_API int ring_buffer__add(struct ring_buffer *rb, int map_fd,
> >  				ring_buffer_sample_fn sample_cb, void *ctx);
> >  LIBBPF_API int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms);
> >  LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);
> > +LIBBPF_API int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id);
> >  LIBBPF_API int ring_buffer__epoll_fd(const struct ring_buffer *rb);
> >
> >  struct user_ring_buffer_opts {
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index 9c7538dd5835e..42dc418b4672f 100644
> > --- a/tools/lib/bpf/libbpf.map
> > +++ b/tools/lib/bpf/libbpf.map
> > @@ -398,4 +398,5 @@ LIBBPF_1.3.0 {
> >  		bpf_prog_detach_opts;
> >  		bpf_program__attach_netfilter;
> >  		bpf_program__attach_tcx;
> > +		ring_buffer__consume_ring;
> >  } LIBBPF_1.2.0;
> > diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> > index 02199364db136..457469fc7d71e 100644
> > --- a/tools/lib/bpf/ringbuf.c
> > +++ b/tools/lib/bpf/ringbuf.c
> > @@ -290,6 +290,28 @@ int ring_buffer__consume(struct ring_buffer *rb)
> >  	return res;
> >  }
> >
> > +/* Consume available data from a single RINGBUF map identified by its ID.
> > + * The ring ID is returned in epoll_data by epoll_wait when called with
> > + * ring_buffer__epoll_fd.
> > + */
> > +int ring_buffer__consume_ring(struct ring_buffer *rb, uint32_t ring_id)
> > +{
> > +	struct ring *ring;
> > +	int64_t res;
> > +
> > +	if (ring_id >= rb->ring_cnt)
> > +		return libbpf_err(-EINVAL);
> > +
> > +	ring = &rb->rings[ring_id];
> > +	res = ringbuf_process_ring(ring);
> > +	if (res < 0)
> > +		return libbpf_err(res);
> > +
> > +	if (res > INT_MAX)
> > +		return INT_MAX;
> > +	return res;
>
> Why not just return int64_t here? Then skip the INT_MAX check? I would
> just assume get the actual value if I was calling this.
>

Mainly for consistency with the existing API. So far, the comparable
LIBBPF_API functions use int. It's hard to imagine that the number of
records would exceed ~2 billion in a single call - I think the
aberration is that ringbuf_process_ring uses a 64-bit counter.

If you do exceed INT_MAX records, something is probably wrong and maybe
the function should return error instead. (But that would be outside
the scope of this patch.)

> > +}
> > +
> >  /* Poll for available data and consume records, if any are available.
> >   * Returns number of records consumed (or INT_MAX, whichever is less), or
> >   * negative number, if any of the registered callbacks returned error.
> > --
> > 2.39.2