On Thu, Jan 16, 2025 at 5:22 AM Martin KaFai Lau <martin.lau@xxxxxxxxx> wrote:
>
> On 1/12/25 3:37 AM, Jason Xing wrote:
> > We will allow both TCP and UDP sockets to use this helper to
> > enable this feature. So let SK_BPF_CB_FLAGS pass the check:
> > 1. skip is_fullsock check
> > 2. skip owned by me check
> >
> > Signed-off-by: Jason Xing <kerneljasonxing@xxxxxxxxx>
> > ---
> >   net/core/filter.c | 27 +++++++++++++++++++++------
> >   1 file changed, 21 insertions(+), 6 deletions(-)
> >
> > diff --git a/net/core/filter.c b/net/core/filter.c
> > index 1ac996ec5e0f..0e915268db5f 100644
> > --- a/net/core/filter.c
> > +++ b/net/core/filter.c
> > @@ -5507,12 +5507,9 @@ static int sol_ipv6_sockopt(struct sock *sk, int optname,
> >                               KERNEL_SOCKPTR(optval), *optlen);
> >   }
> >
> > -static int __bpf_setsockopt(struct sock *sk, int level, int optname,
> > -                            char *optval, int optlen)
> > +static int ___bpf_setsockopt(struct sock *sk, int level, int optname,
> > +                             char *optval, int optlen)
> >   {
> > -     if (!sk_fullsock(sk))
> > -             return -EINVAL;
> > -
> >       if (level == SOL_SOCKET)
> >               return sol_socket_sockopt(sk, optname, optval, &optlen, false);
> >       else if (IS_ENABLED(CONFIG_INET) && level == SOL_IP)
> > @@ -5525,6 +5522,15 @@ static int __bpf_setsockopt(struct sock *sk, int level, int optname,
> >       return -EINVAL;
> >   }
> >
> > +static int __bpf_setsockopt(struct sock *sk, int level, int optname,
> > +                            char *optval, int optlen)
> > +{
> > +     if (!sk_fullsock(sk))
> > +             return -EINVAL;
> > +
> > +     return ___bpf_setsockopt(sk, level, optname, optval, optlen);
> > +}
> > +
> >   static int _bpf_setsockopt(struct sock *sk, int level, int optname,
> >                              char *optval, int optlen)
> >   {
> > @@ -5675,7 +5681,16 @@ static const struct bpf_func_proto bpf_sock_addr_getsockopt_proto = {
> >   BPF_CALL_5(bpf_sock_ops_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
> >              int, level, int, optname, char *, optval, int, optlen)
> >   {
> > -     return _bpf_setsockopt(bpf_sock->sk, level, optname, optval, optlen);
> > +     struct sock *sk = bpf_sock->sk;
> > +
> > +     if (optname != SK_BPF_CB_FLAGS) {
> > +             if (sk_fullsock(sk))
> > +                     sock_owned_by_me(sk);
> > +             else if (optname != SK_BPF_CB_FLAGS)
>
> This is redundant considering the outer "if" has the same check.
>
> Regardless, "optname != SK_BPF_CB_FLAGS" is not the right check. The new
> callback (e.g. BPF_SOCK_OPS_TS_SCHED_OPT_CB) can still call
> bpf_setsockopt(TCP_*) which will be broken without a lock.
>
> It needs to check for bpf_sock->op. I saw patch 5 has the bpf_sock->op check but
> that check is also incorrect. I will comment in there together.

Thanks. Will correct them soon.

> > +                     return -EINVAL;
> > +     }
> > +
> > +     return ___bpf_setsockopt(sk, level, optname, optval, optlen);
> >   }
> >
> >   static const struct bpf_func_proto bpf_sock_ops_setsockopt_proto = {
>