Re: Question about bpf perfbuf/ringbuf: pinned in backend with overwriting

On Sat, Dec 16, 2023 at 12:50 AM Dmitry Vyukov <dvyukov@xxxxxxxxxx> wrote:
>
> On Fri, 15 Dec 2023 at 23:39, Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
> > > On 2023/12/14 07:35, Andrii Nakryiko wrote:
> > > > On Mon, Dec 11, 2023 at 4:39 AM Philo Lu <lulie@xxxxxxxxxxxxxxxxx> wrote:
> > > >>
> > > >>
> > > >>
> > > >> On 2023/12/9 06:32, Andrii Nakryiko wrote:
> > > >>> On Thu, Dec 7, 2023 at 6:49 AM Alan Maguire <alan.maguire@xxxxxxxxxx> wrote:
> > > >>>>
> > > >>>> On 07/12/2023 13:15, Philo Lu wrote:
> > > >>>>> Hi all. I have a question about using perfbuf/ringbuf in bpf. I
> > > >>>>> would appreciate any advice.
> > > >>>>>
> > > >>>>> Imagine a simple case: the bpf program outputs a log (some tcp
> > > >>>>> statistics) to user space every time a packet is received, and the
> > > >>>>> user actively reads the logs whenever they want. I do not want to
> > > >>>>> keep a user process alive waiting for outputs from the buffer; the
> > > >>>>> user can read the buffer as needed. BTW, the order does not matter.
> > > >>>>>
> > > >>>>> To conclude, I hope the buffer performs like relayfs: (1) no need
> > > >>>>> for a user process to receive logs, and the user may read at any
> > > >>>>> time (no wakeup would be even better); (2) old data can be
> > > >>>>> overwritten by new data.
> > > >>>>>
> > > >>>>> Currently, it seems that perfbuf and ringbuf cannot satisfy both:
> > > >>>>> (i) ringbuf only satisfies (1): if data arrives when the buffer is
> > > >>>>> full, the new data is lost until the buffer is consumed. (ii)
> > > >>>>> perfbuf only satisfies (2): the user cannot access the buffer after
> > > >>>>> the process that created it (including perf_event->rb via mmap)
> > > >>>>> exits. Specifically, I can use the BPF_F_PRESERVE_ELEMS flag to
> > > >>>>> keep the perf_events, but I do not know how to get the buffer again
> > > >>>>> in a new process.
> > > >>>>>
> > > >>>>> In my opinion, this can be solved by either of the following: (a)
> > > >>>>> add overwrite support to ringbuf (maybe a new flag for reserve),
> > > >>>>> but we have to address synchronization between kernel and user,
> > > >>>>> especially with variable data sizes, because when overwriting
> > > >>>>> occurs, the kernel has to update the consumer position too; (b)
> > > >>>>> implement map_fd_sys_lookup_elem for perfbuf to expose fds to user
> > > >>>>> space via the map_lookup_elem syscall, and a mechanism is needed to
> > > >>>>> preserve perf_event->rb when the process exits (otherwise the
> > > >>>>> buffer will be freed by perf_mmap_close). I am not sure whether
> > > >>>>> these are feasible, or which is better. If neither is, perhaps we
> > > >>>>> can develop another mechanism to achieve this?
> > > >>>>>
> > > >>>>
> > > >>>> There was an RFC a while back focused on supporting BPF ringbuf
> > > >>>> over-writing [1]; at the time, Andrii noted some potential issues that
> > > >>>> might be exposed by doing multiple ringbuf reserves to overfill the
> > > >>>> buffer within the same program.
> > > >>>>
> > > >>>
> > > >>> Correct. I don't think it's possible to correctly and safely support
> > > >>> overwriting with BPF ringbuf that has variable-sized elements.
> > > >>>
> > > >>> We'll need to implement an MPMC ringbuf (probably with fixed-size
> > > >>> elements) to be able to support this.
> > > >>>
> > > >>
> > > >> Thank you very much!
> > > >>
> > > >> If it is indeed difficult with ringbuf, maybe I can implement a new
> > > >> type of bpf map based on the relay interface [1]? E.g., init relay
> > > >> during map creation, write into it with a bpf helper, and then the
> > > >> user can access it through the filesystem. I think it would be a
> > > >> simple but useful map for overwritable data transfer.
> > > >
> > > > I don't know much about relay, tbh. Give it a try, I guess.
> > > > Alternatively, we need a better and faster implementation of
> > > > BPF_MAP_TYPE_QUEUE, which seems like the data structure that could
> > > > support overwriting and generally be a fixed-element-size
> > > > alternative/complement to BPF ringbuf.
> > > >
> > >
> > > Thank you for your reply. I am afraid BPF_MAP_TYPE_QUEUE cannot, by
> > > design, avoid locking overhead under concurrent reading and writing, and
> >
> > I disagree; I think [0] from Dmitry Vyukov is one way to implement a
> > lock-free BPF_MAP_TYPE_QUEUE. I don't know how easy it would be to
> > implement overwriting support, but it would be worth considering.
> >
> >   [0] https://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue
>
>
> I am missing some context here. But note that this queue is not
> formally lock-free. While it's usually faster and more scalable than
> mutex-protected queues, stuck readers and writers will eventually
> block each other. Getting stuck for a short time is not a problem because
> the queue allows parallelism for both readers and writers. But if
> threads get stuck for a long time and the queue wraps around so that
> writers try to write to elements being read/written by slow threads,
> they block. Similarly, readers get blocked by slow writers even if
> there are other fully written elements in the queue already.
> The queue is not serializable either, which may be surprising in some cases.

Thanks for additional insights, Dmitry!

In our case producers will be either BPF programs or the bpf()
syscall in the kernel, so the expectation is that they will be fast
and guaranteed to run to completion. (We can decide whether
sleepable/faultable BPF programs should be allowed to work with this
QUEUE or not.) For consuming, the main target is probably user space,
and we'd probably want to be able to do that without a syscall,
through mmap()'ed memory. If the consumer is slow, on the producer
side we can perhaps just fail to enqueue a new element (though I'm not
sure how easy it is to tell "slow consumer" apart from "no consumer,
we are full").

Anyways, I think it's an interesting algorithm. I stumbled upon it a
while ago and have always been curious how it would fit BPF use cases :)
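
For reference, here's roughly what the core of that algorithm looks
like in user-space C11 terms. This is only a minimal sketch of [0]
with fixed-size elements; all names are mine, and none of this is
existing kernel or libbpf API:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct cell {
    atomic_size_t seq;      /* per-cell sequence number */
    unsigned long data;     /* fixed-size payload */
};

struct mpmc_queue {
    struct cell *cells;     /* capacity must be a power of two;
                             * at init, cells[i].seq = i */
    size_t mask;            /* capacity - 1 */
    atomic_size_t enqueue_pos;
    atomic_size_t dequeue_pos;
};

/* Returns false when the queue is full. */
static bool mpmc_enqueue(struct mpmc_queue *q, unsigned long val)
{
    size_t pos = atomic_load_explicit(&q->enqueue_pos, memory_order_relaxed);

    for (;;) {
        struct cell *c = &q->cells[pos & q->mask];
        size_t seq = atomic_load_explicit(&c->seq, memory_order_acquire);
        intptr_t dif = (intptr_t)seq - (intptr_t)pos;

        if (dif == 0) {
            /* Cell is free on this lap; try to claim it. */
            if (atomic_compare_exchange_weak_explicit(&q->enqueue_pos,
                    &pos, pos + 1,
                    memory_order_relaxed, memory_order_relaxed)) {
                c->data = val;
                /* Publish: consumers wait for seq == pos + 1. */
                atomic_store_explicit(&c->seq, pos + 1, memory_order_release);
                return true;
            }
            /* Failed CAS reloaded pos for us; retry. */
        } else if (dif < 0) {
            return false;   /* previous lap not consumed yet: full */
        } else {
            pos = atomic_load_explicit(&q->enqueue_pos, memory_order_relaxed);
        }
    }
}

/* Returns false when the queue is empty; symmetric to enqueue. */
static bool mpmc_dequeue(struct mpmc_queue *q, unsigned long *val)
{
    size_t pos = atomic_load_explicit(&q->dequeue_pos, memory_order_relaxed);

    for (;;) {
        struct cell *c = &q->cells[pos & q->mask];
        size_t seq = atomic_load_explicit(&c->seq, memory_order_acquire);
        intptr_t dif = (intptr_t)seq - (intptr_t)(pos + 1);

        if (dif == 0) {
            if (atomic_compare_exchange_weak_explicit(&q->dequeue_pos,
                    &pos, pos + 1,
                    memory_order_relaxed, memory_order_relaxed)) {
                *val = c->data;
                /* Free the cell for the producers' next lap. */
                atomic_store_explicit(&c->seq, pos + q->mask + 1,
                                      memory_order_release);
                return true;
            }
        } else if (dif < 0) {
            return false;   /* nothing published here yet: empty */
        } else {
            pos = atomic_load_explicit(&q->dequeue_pos, memory_order_relaxed);
        }
    }
}

Note that the "dif < 0" branch in mpmc_enqueue() is exactly the case I
mentioned above: the algorithm only sees that the cell from the
previous lap hasn't been consumed yet, so by itself it can't tell a
slow consumer apart from an absent one.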

>
> Adding overwriting support may be an interesting exercise.
> I guess readers could use some variation of a seqlock to deal with
> elements that are being overwritten.

One way I was thinking of would be to remember the sequence number
before reading the data, read the data, and then re-read the sequence
number. If it changed, the user can discard the copy because the data
was modified mid-read. If not, then we have a guarantee that the data
was intact for the entire duration of the read operation.
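
In the same C11 terms, a sketch of that read-side check could look
like this (the elem layout and names are hypothetical; the point is
just the before/after sequence comparison):

#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

struct elem {
    atomic_size_t seq;      /* bumped by the producer on each (re)write */
    char data[64];          /* fixed-size payload */
};

/* Returns false if the element was overwritten while we were copying
 * it, in which case the caller must discard the copy. */
static bool read_elem(const struct elem *e, char out[64])
{
    size_t before = atomic_load_explicit(&e->seq, memory_order_acquire);

    memcpy(out, e->data, sizeof(e->data));  /* may race with a writer */

    /* Keep the copy above from being reordered past the re-read. */
    atomic_thread_fence(memory_order_acquire);
    size_t after = atomic_load_explicit(&e->seq, memory_order_relaxed);

    return before == after;
}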

> Writers can already skip over other slow writers. Normally this is
> used w/o wrap-around, but I suspect it can just work with wrap-around
> as well (a writer can skip over a writer stuck on the previous lap).
> Since we overwrite elements, the queue provides only a very weak
> notion of FIFO anyway, so skipping over very old writers may be fine.

Exactly, it's not really FIFO (so literally retrofitting it into
BPF_MAP_TYPE_QUEUE might not be the best idea; maybe it should be a
new map type), so overwriting behaves as if some consumer quickly
consumed (and discarded) an element and then wrote some new
information over it. That was how my thinking went.

The devil is in details and fitting all this end-to-end into BPF
subsystem, of course.
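
E.g., on the producer side, overwriting could be sketched like this,
reusing the hypothetical mpmc_enqueue()/mpmc_dequeue() from the
earlier sketch (a real implementation would have to bound the retry
loop):

/* Overwrite semantics: if the queue is full, act as a consumer that
 * immediately discards the oldest element, then retry the enqueue. */
static void mpmc_enqueue_overwrite(struct mpmc_queue *q, unsigned long val)
{
    unsigned long discard;

    while (!mpmc_enqueue(q, val))
        (void)mpmc_dequeue(q, &discard);
}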




