On Thu, Dec 31, 2020 at 12:14:13PM -0800, sdf@xxxxxxxxxx wrote:
> On 12/30, Martin KaFai Lau wrote:
> > On Mon, Dec 21, 2020 at 02:22:41PM -0800, Song Liu wrote:
> > > On Thu, Dec 17, 2020 at 9:24 AM Stanislav Fomichev <sdf@xxxxxxxxxx> wrote:
> > > >
> > > > When we attach a bpf program to cgroup/getsockopt, any other getsockopt()
> > > > syscall starts incurring kzalloc/kfree cost. While, in general, it's
> > > > not an issue, sometimes it is, like in the case of TCP_ZEROCOPY_RECEIVE.
> > > > TCP_ZEROCOPY_RECEIVE (ab)uses the getsockopt system call to implement a
> > > > fastpath for incoming TCP, and we don't want to have extra allocations
> > > > there.
> > > >
> > > > Let's add a small buffer on the stack and use it for small (the majority
> > > > of) {s,g}etsockopt values. I've started with 128 bytes to cover
> > > > the options we care about (TCP_ZEROCOPY_RECEIVE, which is 32 bytes
> > > > currently, with a planned extension to 64, plus some headroom
> > > > for the future).
> > >
> > > I don't really know the rule of thumb, but 128 bytes on stack feels too
> > > big to me. I would like to hear others' opinions on this. Can we solve
> > > the problem with some other mechanism, e.g. a mempool?
> > It seems do_tcp_getsockopt() also has "struct tcp_zerocopy_receive"
> > on the stack. I think the buf here is also mimicking
> > "struct tcp_zerocopy_receive", so it should not cause any
> > new problem.
> Good point!
>
> > However, "struct tcp_zerocopy_receive" is only 40 bytes now. I think it
> > is better to have a smaller buf for now and increase it later when the
> > future needs of "struct tcp_zerocopy_receive" are also upstreamed.
> I can lower it to 64. Or even 40?
I think either is fine. Both will need another cacheline in bpf_sockopt_kern.
128 is a bit too much without a clear understanding of what "some headroom
for the future" means.
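
To make the idea being discussed concrete, here is a rough sketch (not the
posted series) of how a small caller-provided buffer could replace the
kzalloc()/kfree() pair for the common small optval case, falling back to an
allocation only for larger values. The names BPF_SOCKOPT_BUF_SIZE,
struct bpf_sockopt_buf, sockopt_alloc_buf()/sockopt_free_buf(), and the
64-byte size are illustrative assumptions, not what the patch actually does:

#include <linux/slab.h>
#include <linux/filter.h>

/* Illustrative size only; the thread suggests somewhere in the 40..64 range. */
#define BPF_SOCKOPT_BUF_SIZE 64

struct bpf_sockopt_buf {
	u8 data[BPF_SOCKOPT_BUF_SIZE];
};

static void *sockopt_alloc_buf(struct bpf_sockopt_kern *ctx, int max_optlen,
			       struct bpf_sockopt_buf *buf)
{
	if (max_optlen <= sizeof(buf->data)) {
		/* Small option value: use the caller's on-stack buffer,
		 * no allocation needed.
		 */
		ctx->optval = buf->data;
		ctx->optval_end = ctx->optval + max_optlen;
		return ctx->optval;
	}

	ctx->optval = kzalloc(max_optlen, GFP_USER);
	if (!ctx->optval)
		return NULL;

	ctx->optval_end = ctx->optval + max_optlen;
	return ctx->optval;
}

static void sockopt_free_buf(struct bpf_sockopt_kern *ctx,
			     struct bpf_sockopt_buf *buf)
{
	/* Only free what was actually kzalloc()'ed. */
	if (ctx->optval != buf->data)
		kfree(ctx->optval);
}

The caller (e.g. the cgroup getsockopt hook) would declare a
struct bpf_sockopt_buf on its stack and pass it into both helpers, which is
where the "another cacheline" concern above comes from: whatever size is
picked ends up on the stack of every hooked {s,g}etsockopt call.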