On Thu, Apr 29, 2021 at 6:05 AM Brendan Jackman <jackmanb@xxxxxxxxxx> wrote:
>
> One of our benchmarks running in (Google-internal) CI pushes data
> through the ringbuf faster than userspace is able to consume
> it. In this case it seems we're actually able to get >INT_MAX entries
> in a single ring_buffer__consume call. ASAN detected that cnt
> overflows in this case.
>
> Fix by using a 64-bit counter internally and then capping the result to
> INT_MAX before converting to the int return type.
>
> Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
> Signed-off-by: Brendan Jackman <jackmanb@xxxxxxxxxx>
> ---
>
> diff v1->v2: Now we don't break the loop at INT_MAX, we just cap the reported
> entry count.
>
> Note: I feel a bit guilty about the fact that this makes the reader
> think about implicit conversions. Nobody likes thinking about that.
>
> But explicit casts don't really help with clarity:
>
>         return (int)min(cnt, (int64_t)INT_MAX); // ugh
>

I'd go with:

        if (cnt > INT_MAX)
                return INT_MAX;

        return cnt;

If you don't mind, I can patch it up while applying?

> shrug..
>
>  tools/lib/bpf/ringbuf.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> index e7a8d847161f..2e114c2d0047 100644
> --- a/tools/lib/bpf/ringbuf.c
> +++ b/tools/lib/bpf/ringbuf.c
> @@ -204,7 +204,9 @@ static inline int roundup_len(__u32 len)
>
>  static int ringbuf_process_ring(struct ring* r)
>  {
> -        int *len_ptr, len, err, cnt = 0;
> +        int *len_ptr, len, err;
> +        /* 64-bit to avoid overflow in case of extreme application behavior */
> +        int64_t cnt = 0;
>          unsigned long cons_pos, prod_pos;
>          bool got_new_data;
>          void *sample;
> @@ -240,7 +242,7 @@ static int ringbuf_process_ring(struct ring* r)
>                  }
>          } while (got_new_data);
>  done:
> -        return cnt;
> +        return min(cnt, INT_MAX);
>  }
>
>  /* Consume available ring buffer(s) data without event polling.
> @@ -263,8 +265,8 @@ int ring_buffer__consume(struct ring_buffer *rb)
>  }
>
>  /* Poll for available data and consume records, if any are available.
> - * Returns number of records consumed, or negative number, if any of the
> - * registered callbacks returned error.
> + * Returns number of records consumed (or INT_MAX, whichever is less), or
> + * negative number, if any of the registered callbacks returned error.
>  */
>  int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
>  {
> --
> 2.31.1.498.g6c1eba8ee3d-goog
>
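
For anyone skimming the thread, the failure mode and the fix boil down to
the standalone sketch below. To be clear, this is not libbpf code:
process_records() is just a hypothetical stand-in for
ringbuf_process_ring(), which actually walks the ring buffer. The point is
that the running count lives in an int64_t (incrementing a plain int past
INT_MAX is signed overflow, i.e. undefined behavior, which is what ASAN
flagged) and is only narrowed to int after being capped.

        #include <limits.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical stand-in for ringbuf_process_ring(): pretend
         * n_avail records were consumed and report how many, capped so
         * the value always fits the int return type. */
        static int process_records(int64_t n_avail)
        {
                /* 64-bit counter, as in the patch: it has headroom far
                 * beyond INT_MAX, so it cannot realistically overflow */
                int64_t cnt = n_avail;

                /* cap before the implicit narrowing conversion to int */
                if (cnt > INT_MAX)
                        return INT_MAX;

                return cnt;
        }

        int main(void)
        {
                /* more records than INT_MAX, as the CI benchmark produced */
                printf("%d\n", process_records((int64_t)INT_MAX + 123));
                /* prints 2147483647 rather than overflowing */
                return 0;
        }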