On 12/21, Song Liu wrote:
> On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@xxxxxxxxxx> wrote:
> > When we attach a bpf program to cgroup/getsockopt, any other
> > getsockopt() syscall starts incurring kzalloc/kfree cost. While, in
> > general, it's not an issue, sometimes it is, like in the case of
> > TCP_ZEROCOPY_RECEIVE. TCP_ZEROCOPY_RECEIVE (ab)uses the getsockopt
> > system call to implement a fastpath for incoming TCP; we don't want
> > to have extra allocations in there.
> >
> > Let's add a small buffer on the stack and use it for small (the
> > majority of) {s,g}etsockopt values. I've started with 128 bytes to
> > cover the options we care about (TCP_ZEROCOPY_RECEIVE, which is 32
> > bytes currently, with some planned extension to 64, plus some
> > headroom for the future).
> >
> > It seems natural to do the same for setsockopt, but it's a bit more
> > involved when the BPF program modifies the data (where we have to
> > kmalloc). The assumption is that for the majority of setsockopt
> > calls (which are doing pure BPF options or applying policy) this
> > will bring some benefit as well.
> >
> > Signed-off-by: Stanislav Fomichev <sdf@xxxxxxxxxx>
>
> Could you please share some performance numbers for this optimization?
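
For context, the core of the change is just to skip the allocation for
small option values. Roughly, the pattern looks like the following
simplified userspace-style sketch (not the actual kernel diff; the
struct and helper names are made up for illustration, and the kernel
slow path would of course be kmalloc/kfree rather than calloc/free):

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define SOCKOPT_STACK_BUF 128	/* TCP_ZEROCOPY_RECEIVE + headroom */

struct sockopt_buf {
	char stack[SOCKOPT_STACK_BUF];
	void *data;		/* either &stack[0] or a heap allocation */
	bool heap;
};

/* Small (common) values reuse the caller's on-stack area; only
 * oversized values pay for the allocation that every call used to
 * pay unconditionally. */
static void *sockopt_buf_alloc(struct sockopt_buf *b, size_t len)
{
	if (len <= sizeof(b->stack)) {
		memset(b->stack, 0, len);
		b->data = b->stack;
		b->heap = false;
	} else {
		b->data = calloc(1, len);
		b->heap = true;
	}
	return b->data;
}

static void sockopt_buf_free(struct sockopt_buf *b)
{
	if (b->heap)
		free(b->data);
}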
We found this problem by looking at our global Google profiler, where
TCP_ZEROCOPY_RECEIVE was showing up higher than usual. So I don't have
a nice reproducer, but I can try to run something like
tools/testing/selftests/net/tcp_mmap.c under perf and see if there is a
clear difference.
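
If tcp_mmap isn't conclusive, I could also hack up a crude standalone
microbenchmark along these lines (rough sketch, untested) and run it
once without and once with a cgroup/getsockopt program attached to
isolate the per-call cost; the exact option shouldn't matter much since
the allocation happens in the hook for every getsockopt():

#include <stdio.h>
#include <time.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int main(void)
{
	struct timespec start, end;
	struct tcp_info info;
	socklen_t len;
	long i, iters = 1000000;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return 1;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iters; i++) {
		len = sizeof(info);
		/* works on an unconnected socket, fields are just zero */
		getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("%.1f ns per getsockopt()\n",
	       ((end.tv_sec - start.tv_sec) * 1e9 +
		(end.tv_nsec - start.tv_nsec)) / iters);
	return 0;
}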