RE: BPF ring buffer variable-length data appending

Andrii Nakryiko wrote:
> We discussed this topic today at office hour. As I mentioned, I don't
> know the ideal solution, but here is something that has enough
> flexibility for real-world uses, while giving the performance and
> convenience of reserve/commit API. Ignore naming, we can bikeshed that
> later.

Missed office hours today, dang, this sounds interesting. Apologies if
I'm missing some context.

> 
> So what we can do is introduce a new bpf_ringbuf_reserve() variant:
> 
> bpf_ringbuf_reserve_extra(void *ringbuf, __u64 size, __u64 flags, void
> *extra, __u64 extra_sz);
> 
> The idea is that we reserve a fixed size amount of data that can be
> used like it is today for filling a fixed-sized metadata/sample
> directly. But the real size of the reserved sample is (size +
> extra_sz), and bpf_ringbuf_reserve_extra() helper will bpf_probe_read
> (kernel or user, depending on flags) data from extra and put it right
> after the fixed-size part.
> 
> So the use would be something like:
> 
> struct my_meta *m = bpf_ringbuf_reserve_extra(&rb, sizeof(*m),
> BPF_RB_PROBE_USER, env_vars, 1024);
> 
> if (!m)
>     /* too bad, either probe_read_user failed or ringbuf is full */
>     return 1;
> 
> m->my_field1 = 123;
> m->my_field2 = 321;
> 

Is this a way to increase the reserved space? I'm a bit fuzzy on the
details here. But what happens if something else has done another
reserve? Then it fails I guess?

So consider,

CPU0                   CPU1

bpf_ringbuf_reserve()
                       bpf_ringbuf_reserve()
bpf_ringbuf_reserve_extra()

Does that *_reserve_extra() fail then? If so, it seems very limited
from a usability perspective. Most of the systems we work with would
fail more often than not, I think.

If the above doesn't fail, I'm missing something about how userspace
can know where that buffer is without a scatter-gather list.

> 
> So the main problem with this is that when probe_read fails, we fail
> reservation completely(internally we'd just discard ringbuf sample).
> Is that OK? Or is it better to still reserve fixed-sized part and
> zero-out the variable-length part? We are combining two separate
> operations into a single API, so error handling is more convoluted.

My $.02 here. Failing is going to be ugly and a real pain to deal
with. I think the best approach is to reserve a fixed-size buffer that
meets your 99% case or whatever, then reserve some overflow buffers
you can point to for the oddball Java application with a million
strings. Yes, userspace needs to get more complicated to manage the
thing, but once that code's written everything works out.
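
To make that concrete, here's a minimal layout sketch of what I have
in mind. All struct and field names are made up for illustration:

/* Hypothetical layout: the fixed-size sample covers the ~99% case
 * inline; when the payload doesn't fit, overflow_id ties the sample
 * to one or more separately reserved overflow samples that carry the
 * remainder. */
struct sample_hdr {
	__u64 overflow_id;      /* 0 = fully inline, else joins overflow samples */
	__u32 total_len;        /* full payload length across all pieces */
	__u32 inline_len;       /* bytes valid in inline_data */
	char  inline_data[256]; /* sized for the common case */
};

struct overflow_hdr {
	__u64 id;               /* matches sample_hdr.overflow_id */
	__u32 len;              /* bytes valid in data */
	char  data[1024];       /* fixed so bpf_ringbuf_reserve() size is constant */
};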

Also, I think we keep the kernel simpler if the BPF program just
does another reserve() when it needs more space, so:

 bpf_ringbuf_reserve()
 copy
 copy
 ENOMEM <- buffer is full
 bpf_ringbuf_reserve()
 copy
 copy
 ....

Again, userspace needs some logic to join the two buffers, but we
could come up with some user-side convention to do this. libbpf,
for example, could have a small buffer header for this if folks
wanted. Smart BPF programs could even reserve a couple of buffers
up front for the worst case and recycle them into the next
invocation, I think.
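
A rough sketch of that convention on the BPF side (struct and macro
names invented; the ringbuf helpers are the real ones):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define CHUNK_SZ 512

struct rb_chunk {
	__u64 event_id;  /* same across all chunks of one logical event */
	__u32 seq;       /* 0, 1, 2, ... */
	__u32 flags;     /* bit 0 = last chunk, by convention */
	char  data[CHUNK_SZ];
};

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 1 << 20);
} rb SEC(".maps");

static __always_inline int emit_chunk(__u64 id, __u32 seq, __u32 flags,
				      const void *src, __u32 len)
{
	struct rb_chunk *c = bpf_ringbuf_reserve(&rb, sizeof(*c), 0);

	if (!c)
		return -1; /* ringbuf full; userspace sees a short event */

	c->event_id = id;
	c->seq = seq;
	c->flags = flags;
	if (len > CHUNK_SZ)
		len = CHUNK_SZ;
	bpf_probe_read_kernel(c->data, len, src);
	bpf_ringbuf_submit(c, 0);
	return 0;
}

Each sample stands alone as far as the kernel is concerned; the
joining is purely a userspace convention.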

Conceptually, what is the difference between a second reserve()
and reserve_extra()?

> 
> 
> Now, the main use case requested was to be able to fetch an array of
> zero-terminated strings. I honestly don't think it's possible to
> implement this efficiently without two copies of string data. Mostly
> because to just determine the size of the string you have to read it
> one extra time. And you'd probably want to copy string data into some
> controlled storage first, so that you don't end up reading it once
> successfully and then failing to read it on the second try. Next, when
> you have multiple strings, how do you deal with partial failures? It's
> even worse in terms of error handling and error propagation than the
> fixed extra size variant described above.

+1 agree

> 
> Ignoring all that, let's say we'd implement
> bpf_ringbuf_reserve_extra_strs() helper, that would somehow be copying
> multiple zero-terminated strings after the fixed-size prefix. Think
> about implementation. Just to determine the total size of the ringbuf
> sample, you'd need to read all strings once, and probably also copy
> them locally.  Then you'd reserve a ringbuf sample and copy all that
> for the second time. So it's as inefficient as a BPF program
> constructing a single block of memory by reading all such strings
> manually into a per-CPU array and then using the above
> bpf_ringbuf_reserve_extra().

+1
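
FWIW the staging half of that already works today with existing
helpers. A rough sketch, where the attach point, sizes, and names are
just for illustration:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define STAGE_SZ 4096

struct stage_buf { char data[STAGE_SZ]; };

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct stage_buf);
} stage SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 1 << 20);
} rb SEC(".maps");

SEC("kprobe/do_sys_open")
int stage_and_emit(struct pt_regs *ctx)
{
	const char *uptr = (const char *)PT_REGS_PARM2(ctx);
	struct stage_buf *s;
	__u32 key = 0;
	long n;

	s = bpf_map_lookup_elem(&stage, &key);
	if (!s)
		return 0;

	/* copy #1: user memory -> per-CPU staging buffer */
	n = bpf_probe_read_user_str(s->data, STAGE_SZ, uptr);
	if (n <= 0)
		return 0;

	/* copy #2: staging buffer -> ringbuf, only the bytes used */
	bpf_ringbuf_output(&rb, s->data, n, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Two copies, as you say, but all with helpers that exist now.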

> 
> But offloading that preparation to a BPF program bypasses all these
> error handling and memory layout questions. It will be up to a BPF
> program itself. From a kernel perspective, we just append a block of
> memory with known (at runtime) size.

Still missing how this would be different from multiple reserve()
calls. It's not too hard to join userspace buffers, I promise ;)
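
To back that up, the userspace half of the chunk convention is short.
A sketch against libbpf's ring_buffer API, where struct rb_chunk
mirrors the hypothetical BPF-side header from earlier:

#include <linux/types.h>
#include <string.h>
#include <bpf/libbpf.h>

#define CHUNK_SZ 512

struct rb_chunk {
	__u64 event_id;
	__u32 seq;
	__u32 flags; /* bit 0 = last chunk */
	char  data[CHUNK_SZ];
};

static char joined[64 * 1024];
static size_t joined_len;

/* Naive reassembly that assumes chunks of one event arrive back to
 * back; real code would key a table on event_id and check seq. */
static int on_sample(void *ctx, void *data, size_t size)
{
	const struct rb_chunk *c = data;

	if (joined_len + CHUNK_SZ > sizeof(joined))
		joined_len = 0; /* overflow: drop the partial event */
	memcpy(joined + joined_len, c->data, CHUNK_SZ);
	joined_len += CHUNK_SZ;
	if (c->flags & 1) {
		/* complete event in joined[0..joined_len), hand it off */
		joined_len = 0;
	}
	return 0;
}

/* wired up with the usual:
 *   struct ring_buffer *rb = ring_buffer__new(map_fd, on_sample, NULL, NULL);
 *   while (ring_buffer__poll(rb, 100) >= 0) { }
 */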

> 
> As a more restricted version of bpf_ringbuf_reserve_extra(), instead
> of allowing reading arbitrary kernel or user-space memory in
> bpf_ringbuf_reserve_extra() we can say that it has to be known and
> initialized memory (like MAP_VALUE pointer), so helper knows that it
> can just copy data directly.

This is a fairly common operation for us, though often with just
chunks of a map value, so we would want start/end byte offsets. Our
map values often have extra data that userspace doesn't need or care
about.
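
With your proposed signature the offsets could just be pointer
arithmetic on the map value. Something like this, reusing your
not-yet-existing helper plus a made-up BPF_RB_MAP_VALUE flag:

 char *val = bpf_map_lookup_elem(&data_map, &key);

 if (!val)
     return 0;

 /* append only bytes [HDR_OFF, HDR_OFF + PAYLOAD_SZ) of the map value,
  * skipping the bookkeeping fields userspace doesn't care about */
 m = bpf_ringbuf_reserve_extra(&rb, sizeof(*m), BPF_RB_MAP_VALUE,
                               val + HDR_OFF, PAYLOAD_SZ);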

> 
> Thoughts?

I think I missed the point.

> 
> -- Andrii


