On Mon, Dec 9, 2024 at 8:04 PM D. Wythe <alibuda@xxxxxxxxxxxxxxxxx> wrote:
>
> +SEC("struct_ops/bpf_smc_set_tcp_option_cond")
> +int BPF_PROG(bpf_smc_set_tcp_option_cond, const struct tcp_sock *tp,
> +	     struct inet_request_sock *ireq)
> +{
> +	return 0;
> +}
> +
> +SEC("struct_ops/bpf_smc_set_tcp_option")
> +int BPF_PROG(bpf_smc_set_tcp_option, struct tcp_sock *tp)
> +{
> +	return 1;
> +}
> +
> +SEC(".struct_ops.link")
> +struct smc_ops sample_smc_ops = {
> +	.name = "sample",
> +	.set_option = (void *) bpf_smc_set_tcp_option,
> +	.set_option_cond = (void *) bpf_smc_set_tcp_option_cond,
> +};

These stubs don't inspire confidence that the smc_ops API will be
sufficient. Please implement a real BPF program that demonstrates the
actual use case.

See how bpf_cubic was done. On day one it was implemented as a parity
to the built-in cubic congestion control, and over the years we didn't
need to touch tcp_congestion_ops. To be fair, that API was already
solid thanks to the in-kernel CC modules, but BPF comes with its own
limitations, so there was no guarantee that tcp_congestion_ops would
be enough.

Here you're proposing a brand-new smc_ops API while the BPF progs are
nothing but stubs. That's not sufficient to prove that the API is
viable long term.

In terms of look and feel, smc_ops looks OK. The change from v1 to v2
was a good step.

pw-bot: cr
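
For illustration, here is a minimal sketch of the kind of non-stub
program being asked for: a policy that only advertises SMC toward one
peer port, loosely following the bpf_cubic pattern of putting real
logic behind each callback. The port number (8443), the field paths,
and the assumption that a nonzero return means "set the SMC option"
are illustrative guesses, not taken from the patch; it also assumes a
vmlinux.h generated from a kernel that carries the proposed smc_ops
support.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_endian.h>

char _license[] SEC("license") = "GPL";

/* Hypothetical policy knob: only negotiate SMC toward this port. */
#define SMC_ALLOWED_PORT 8443

SEC("struct_ops/bpf_smc_set_tcp_option")
int BPF_PROG(bpf_smc_set_tcp_option, struct tcp_sock *tp)
{
	/* Active side: read the destination port from the embedded
	 * sock_common (the inet_dport macro does not exist in
	 * vmlinux.h). skc_dport is in network byte order. */
	__be16 dport = tp->inet_conn.icsk_inet.sk.__sk_common.skc_dport;

	return dport == bpf_htons(SMC_ALLOWED_PORT);
}

SEC("struct_ops/bpf_smc_set_tcp_option_cond")
int BPF_PROG(bpf_smc_set_tcp_option_cond, const struct tcp_sock *tp,
	     struct inet_request_sock *ireq)
{
	/* Passive side: apply the same policy to the remote port of
	 * the incoming request (ir_rmt_port is likewise a macro, so
	 * go through request_sock's sock_common directly). */
	__be16 rmt_port = ireq->req.__req_common.skc_dport;

	return rmt_port == bpf_htons(SMC_ALLOWED_PORT);
}

SEC(".struct_ops.link")
struct smc_ops sample_smc_ops = {
	.name = "sample",
	.set_option = (void *) bpf_smc_set_tcp_option,
	.set_option_cond = (void *) bpf_smc_set_tcp_option_cond,
};

Even something this small exercises both callbacks against real socket
state on the active and passive paths, which is roughly the bar that
bpf_cubic set for tcp_congestion_ops.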