On Tue, Dec 22, 2020 at 07:09:33PM -0800, sdf@xxxxxxxxxx wrote:
> On 12/22, Martin KaFai Lau wrote:
> > On Thu, Dec 17, 2020 at 09:23:23AM -0800, Stanislav Fomichev wrote:
> > > When we attach a bpf program to cgroup/getsockopt, any other getsockopt()
> > > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > > not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> > > TCP_ZEROCOPY_RECEIVE (ab)uses the getsockopt system call to implement
> > > a fastpath for incoming TCP, and we don't want to have extra allocations
> > > there.
> > >
> > > Let's add a small buffer on the stack and use it for small (majority)
> > > {s,g}etsockopt values. I've started with 128 bytes to cover
> > > the options we care about (TCP_ZEROCOPY_RECEIVE, which is 32 bytes
> > > currently, with a planned extension to 64, plus some headroom
> > > for the future).
> > >
> > > It seems natural to do the same for setsockopt, but it's a bit more
> > > involved when the BPF program modifies the data (where we have to
> > > kmalloc). The assumption is that for the majority of setsockopt
> > > calls (which use pure BPF options or apply policy) this
> > > will bring some benefit as well.
> > >
> > > Signed-off-by: Stanislav Fomichev <sdf@xxxxxxxxxx>
> > > ---
> > >  include/linux/filter.h |  3 +++
> > >  kernel/bpf/cgroup.c    | 41 +++++++++++++++++++++++++++++++++++++++--
> > >  2 files changed, 42 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > > index 29c27656165b..362eb0d7af5d 100644
> > > --- a/include/linux/filter.h
> > > +++ b/include/linux/filter.h
> > > @@ -1281,6 +1281,8 @@ struct bpf_sysctl_kern {
> > >  	u64 tmp_reg;
> > >  };
> > >
> > > +#define BPF_SOCKOPT_KERN_BUF_SIZE	128
> > Since these 128 bytes (which then need to be zeroed) are modeled after
> > the TCP_ZEROCOPY_RECEIVE use case, it would be useful to explain how
> > the bpf prog will interact with getsockopt(TCP_ZEROCOPY_RECEIVE).
> The only thing I would expect a BPF program to do is return EPERM to
> cause the application to fall back to the non-zerocopy path (and,
> mostly, bypass). I don't think BPF can meaningfully mangle the data in
> struct tcp_zerocopy_receive.
>
> Does that address your concern? Or do you want me to add a comment or
> something?
I was asking because, while 128 bytes may work best for
TCP_ZEROCOPY_RECEIVE, it means a lot of unnecessary byte-zeroing for
most other options. Hence, I am interested to see if there is a
practical bpf use case for TCP_ZEROCOPY_RECEIVE.
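
For concreteness, the fallback use case Stanislav describes would be a
cgroup/getsockopt program along these lines. Untested sketch, not from
this patch; the program name is made up and the constants are redefined
locally only in case the system headers are too old to have them:

	// SPDX-License-Identifier: GPL-2.0
	/* Illustration of the "return EPERM to force the fallback" idea:
	 * a cgroup/getsockopt program that rejects TCP_ZEROCOPY_RECEIVE
	 * so the application falls back to the regular copy path.
	 */
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	#ifndef SOL_TCP
	#define SOL_TCP 6			/* == IPPROTO_TCP */
	#endif
	#ifndef TCP_ZEROCOPY_RECEIVE
	#define TCP_ZEROCOPY_RECEIVE 35		/* include/uapi/linux/tcp.h */
	#endif

	SEC("cgroup/getsockopt")
	int deny_zerocopy_receive(struct bpf_sockopt *ctx)
	{
		if (ctx->level == SOL_TCP && ctx->optname == TCP_ZEROCOPY_RECEIVE)
			return 0;	/* getsockopt() fails with EPERM and the
					 * application falls back to recvmsg()
					 */

		return 1;		/* let everything else through unmodified */
	}

	char _license[] SEC("license") = "GPL";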
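
For reference, the byte-zeroing I mention comes from the shape of the
fast path: optvals up to BPF_SOCKOPT_KERN_BUF_SIZE are served from a
preallocated buffer that still has to be cleared on every call, and only
larger ones hit kzalloc(). Roughly like the sketch below; this is an
illustration only, not the actual kernel/bpf/cgroup.c hunks, and both the
helper names and the ctx->buf member are assumed here (the trimmed diff
above only shows the define):

	/* Illustration only.  Assumes bpf_sockopt_kern gains a
	 * "u8 buf[BPF_SOCKOPT_KERN_BUF_SIZE]" member next to the new define;
	 * sockopt_alloc_buf()/sockopt_free_buf() are made-up names.
	 */
	static void *sockopt_alloc_buf(struct bpf_sockopt_kern *ctx, int max_optlen)
	{
		if (max_optlen <= BPF_SOCKOPT_KERN_BUF_SIZE) {
			/* Small (common) case: no allocation, but all 128
			 * bytes still get zeroed on every call.
			 */
			memset(ctx->buf, 0, BPF_SOCKOPT_KERN_BUF_SIZE);
			return ctx->buf;
		}

		/* Large optvals keep the old kzalloc/kfree behaviour. */
		return kzalloc(max_optlen, GFP_USER);
	}

	static void sockopt_free_buf(struct bpf_sockopt_kern *ctx)
	{
		if (ctx->optval != ctx->buf)
			kfree(ctx->optval);
	}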