Re: [PATCH bpf-next 07/14] libbpf: add ring__avail_data_size

On Thu, Sep 14, 2023 at 4:12 PM Martin Kelly
<martin.kelly@xxxxxxxxxxxxxxx> wrote:
>
> Add ring__avail_data_size for querying the currently available data in
> the ringbuffer, similar to the BPF_RB_AVAIL_DATA flag in
> bpf_ringbuf_query. This is racy during ongoing operations but is still
> useful for overall information on how a ringbuffer is behaving.
>
> Signed-off-by: Martin Kelly <martin.kelly@xxxxxxxxxxxxxxx>
> ---
>  tools/lib/bpf/libbpf.h   | 11 +++++++++++
>  tools/lib/bpf/libbpf.map |  1 +
>  tools/lib/bpf/ringbuf.c  |  5 +++++
>  3 files changed, 17 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index 935162dbb3bf..87e3bad37737 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -1279,6 +1279,17 @@ LIBBPF_API unsigned long ring__consumer_pos(const struct ring *r);
>   */
>  LIBBPF_API unsigned long ring__producer_pos(const struct ring *r);
>
> +/**
> + * @brief **ring__avail_data_size()** returns the number of bytes in this
> + * ringbuffer not yet consumed. This has no locking associated with it, so it
> + * can be inaccurate if operations are ongoing while this is called. However, it
> + * should still show the correct trend over the long-term.
> + *
> + * @param r A ring object.
> + * @return The number of bytes not yet consumed.
> + */
> +LIBBPF_API size_t ring__avail_data_size(const struct ring *r);
> +
>  struct user_ring_buffer_opts {
>         size_t sz; /* size of this struct, for forward/backward compatibility */
>  };
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index 1c532fe7a445..f66d7f0bc224 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -401,6 +401,7 @@ LIBBPF_1.3.0 {
>                 bpf_program__attach_tcx;
>                 bpf_program__attach_uprobe_multi;
>                 ring_buffer__ring;
> +               ring__avail_data_size;
>                 ring__consumer_pos;
>                 ring__producer_pos;
>  } LIBBPF_1.2.0;
> diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> index 54c596db57a4..f51ad1af6ab8 100644
> --- a/tools/lib/bpf/ringbuf.c
> +++ b/tools/lib/bpf/ringbuf.c
> @@ -350,6 +350,11 @@ unsigned long ring__producer_pos(const struct ring *r)
>         return smp_load_acquire(r->producer_pos);
>  }
>
> +size_t ring__avail_data_size(const struct ring *r)
> +{
> +       return ring__producer_pos(r) - ring__consumer_pos(r);

this might be ok as is, but if you look at the kernel implementation,
we make sure to read the consumer position first and the producer
position second, and only then calculate the difference. This ordering
is deliberate: it avoids the situation where the consumer position is
observed as greater than the producer position, which would produce a
nonsensical negative (or, for an unsigned type, huge) result.

Let's do the same here: use two local variables and the conservative
ordering, consumer first, then producer.


> +}
> +
>  static void user_ringbuf_unmap_ring(struct user_ring_buffer *rb)
>  {
>         if (rb->consumer_pos) {
> --
> 2.34.1
>
