On Wed, Sep 20, 2023 at 08:08:34PM +0800, D. Wythe wrote:
> From: "D. Wythe" <alibuda@xxxxxxxxxxxxxxxxx>
>
> Consider the following scenarios:
>
> smc_release
>     smc_close_active
>         write_lock_bh(&smc->clcsock->sk->sk_callback_lock);
>         smc->clcsock->sk->sk_user_data = NULL;
>         write_unlock_bh(&smc->clcsock->sk->sk_callback_lock);
>
>                                 smc_tcp_syn_recv_sock
>                                     smc = smc_clcsock_user_data(sk);
>                                     /* now */
>                                     /* smc == NULL */
>
> Hence, we may read a NULL value in smc_tcp_syn_recv_sock(). And
> since we only unset sk_user_data during smc_release, it's safe to
> drop the incoming tcp reqsock.
>
> Fixes: ("net/smc: net/smc: Limit backlog connections"

The tag above is malformed. The correct form is:

Fixes: 8270d9c21041 ("net/smc: Limit backlog connections")

> Signed-off-by: D. Wythe <alibuda@xxxxxxxxxxxxxxxxx>
> ---
>  net/smc/af_smc.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
> index bacdd97..b4acf47 100644
> --- a/net/smc/af_smc.c
> +++ b/net/smc/af_smc.c
> @@ -125,6 +125,8 @@ static struct sock *smc_tcp_syn_recv_sock(const struct sock *sk,
>  	struct sock *child;
>
>  	smc = smc_clcsock_user_data(sk);
> +	if (unlikely(!smc))
> +		goto drop;
>
>  	if (READ_ONCE(sk->sk_ack_backlog) + atomic_read(&smc->queued_smc_hs) >
>  				sk->sk_max_ack_backlog)
> --
> 1.8.3.1