> On Mon, 17 Jun 2024 06:15:25 -0400 Jeff Layton wrote:
> > We've had a number of these reports recently. I think I understand what's
> > happening but I'm not sure how to fix it. The problem manifests as a
> > stuck nfsd_mutex:
> >
> > nfsd_nl_rpc_status_get_start takes the nfsd_mutex, and it's released in
> > nfsd_nl_rpc_status_get_done. These are the ->start and ->done
> > operations for the rpc_status_get dumpit routine.
> >
> > I think syzbot is triggering one of the two "goto errout_skb"
> > conditions in netlink_dump (not sure which). In those cases we end up
> > returning from that function without calling ->done, which would lead
> > to the hung mutex like we see here.
> >
> > Is this a bug in the netlink code, or is the rpc_status_get dumpit
> > routine not using ->start and ->done correctly?
>
> Dumps are spread over multiple recvmsg() calls; even if we error out,
> the next recvmsg() will dump again, until ->done() is called. And we'll
> call ->done() if the socket is closed without reaching the end.
>
> But the multi-syscall nature puts us at the mercy of the user, meaning
> that holding locks from ->start() to ->done() is a bit of a no-no.
> Many of the dumps dump the contents of an XArray, so it's easy to remember
> an index and continue dumping from where we left off.

I guess we can grab the nfsd_mutex lock in nfsd_nl_rpc_status_get_dumpit()
and get rid of nfsd_nl_rpc_status_get_start() and
nfsd_nl_rpc_status_get_done() completely. We will just verify the NFS
server is running each time the dumpit callback is executed.
What do you think?

Regards,
Lorenzo
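For reference, a minimal sketch of what the dumpit-only locking could look
like, keeping only the function and lock names that appear in this thread.
The nfsd_net lookup and the nn->nfsd_serv check are assumptions about how
"verify the NFS server is running" would be expressed, not the actual patch:

```c
/* Sketch only: take nfsd_mutex for the duration of a single dumpit
 * pass instead of holding it from ->start() to ->done(), so an
 * aborted dump cannot leave the mutex stuck.
 */
static int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb,
					 struct netlink_callback *cb)
{
	struct nfsd_net *nn = net_generic(sock_net(skb->sk), nfsd_net_id);
	int ret = -ENODEV;

	mutex_lock(&nfsd_mutex);

	/* Re-check on every dumpit call, since the server may have
	 * been shut down between recvmsg() syscalls.
	 */
	if (!nn->nfsd_serv)
		goto out_unlock;

	/* ... emit rpc_status entries, resuming from the index saved
	 * in cb->args[] on the previous pass ...
	 */
	ret = skb->len;

out_unlock:
	mutex_unlock(&nfsd_mutex);
	return ret;
}
```

With this shape, each recvmsg() pass acquires and releases the mutex on its
own, so the netlink core's error paths can return without ->done() ever
running and no lock is left held.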