We have a customer who hit a rare race in sunrpc (in a 3.0-based kernel, but the relevant code doesn't seem to have changed much).

The thread that crashed was in

  xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested

and 'sock' in this last function is NULL. The only way I can imagine this happening is if some other thread called

  xs_close -> xs_reset_transport -> sock_release -> inet_release

in a very small window a moment earlier.

As far as I can tell, xs_close is only called with XPRT_LOCKED set, and xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set too, which would exclude the two from running at the same time. However, xs_tcp_schedule_linger_timeout can schedule the work item which runs xs_tcp_setup_socket without first claiming XPRT_LOCKED. So I assume that is what is happening: some race between the client closing the socket and the TCP_FIN_WAIT1 state change arriving from the server leaves the two threads running concurrently.

I wonder if it might make sense to always abort 'connect_worker' in xs_close()? The connect_worker really mustn't be running or queued at this point, so cancelling it is either a no-op or vitally important.

So: does the following patch seem reasonable? If so, I'll submit it properly with a coherent description etc.

Thanks,
NeilBrown

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index ee03d35..b19ba53 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -835,6 +835,8 @@ static void xs_close(struct rpc_xprt *xprt)
 
 	dprintk("RPC:       xs_close xprt %p\n", xprt);
 
+	cancel_delayed_work_sync(&transport->connect_worker);
+
 	xs_reset_transport(transport);
 	xprt->reestablish_timeout = 0;
 
@@ -869,12 +871,8 @@ static void xs_local_destroy(struct rpc_xprt *xprt)
  */
 static void xs_destroy(struct rpc_xprt *xprt)
 {
-	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
-
 	dprintk("RPC:       xs_destroy xprt %p\n", xprt);
 
-	cancel_delayed_work_sync(&transport->connect_worker);
-
 	xs_local_destroy(xprt);
 }
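For illustration only, here is a minimal userspace sketch of the ordering the patch enforces (all names are hypothetical stand-ins, not the kernel code): a worker that can be queued without holding the lock dereferences a socket that the close path frees, and having close wait for the worker first, by analogy with cancel_delayed_work_sync(), removes that window.

/* Hypothetical sketch; compile with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_transport {
	int *sock;        /* stands in for transport->sock */
	pthread_t worker; /* stands in for connect_worker */
};

/* Stands in for xs_tcp_setup_socket: it can be queued without
 * XPRT_LOCKED, so nothing excludes it from the close path. */
static void *worker_fn(void *arg)
{
	struct fake_transport *t = arg;

	/* Without the join in close_fn(), this read could race with
	 * the free() below -- the analogue of the crash in
	 * lock_sock_nested() on a NULL sock. */
	printf("worker sees sock=%p\n", (void *)t->sock);
	return NULL;
}

/* Stands in for xs_close() with the proposed fix applied. */
static void close_fn(struct fake_transport *t)
{
	/* Analogue of cancel_delayed_work_sync(): a no-op if the worker
	 * already finished, vital if it is still running or queued. */
	pthread_join(t->worker, NULL);

	free(t->sock);    /* analogue of sock_release() */
	t->sock = NULL;
}

int main(void)
{
	struct fake_transport t = { .sock = malloc(sizeof(*t.sock)) };

	pthread_create(&t.worker, NULL, worker_fn, &t);
	close_fn(&t);     /* worker is drained before teardown */
	return 0;
}

The point is only the ordering: once the close path has synchronously drained the worker, the worker can never observe a half-torn-down transport, which is why cancelling it is either a no-op or vitally important.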