On 21 Feb 2025, at 9:37, Jeff Layton wrote:

> On Fri, 2025-02-21 at 09:06 -0500, Benjamin Coddington wrote:
>> On 18 Feb 2025, at 9:40, Jeff Layton wrote:
>>
>>> On Tue, 2025-02-18 at 09:31 -0500, Chuck Lever wrote:
>>>> On 2/18/25 9:29 AM, Jeff Layton wrote:
>>>>> On Tue, 2025-02-18 at 08:58 -0500, Jeff Layton wrote:
>>>>>> On Tue, 2025-02-18 at 21:54 +0800, Li Lingfeng wrote:
>>>>>>> In nfsd4_run_cb, cl_cb_inflight is increased before attempting to queue
>>>>>>> cb_work to callback_wq. This count can be decreased in three situations:
>>>>>>> 1) If queuing fails in nfsd4_run_cb, the count will be decremented
>>>>>>> accordingly.
>>>>>>> 2) After cb_work is running, the count is decreased in the exception
>>>>>>> branch of nfsd4_run_cb_work via nfsd41_destroy_cb.
>>>>>>> 3) The count is decreased in the release callback of rpc_task, either by
>>>>>>> directly calling nfsd41_cb_inflight_end in nfsd4_cb_probe_release, or by
>>>>>>> calling nfsd41_destroy_cb in .
>>>>>>>
>>>>>>> However, in nfsd4_cb_release, if the current cb_work needs to restart,
>>>>>>> the count will not be decreased, with the expectation that it will be
>>>>>>> reduced once cb_work is running.
>>>>>>> If queuing fails here, then the count will leak, ultimately causing the
>>>>>>> nfsd service to be unable to exit as shown below:
>>>>>>> [root@nfs_test2 ~]# cat /proc/2271/stack
>>>>>>> [<0>] nfsd4_shutdown_callback+0x22b/0x290
>>>>>>> [<0>] __destroy_client+0x3cd/0x5c0
>>>>>>> [<0>] nfs4_state_destroy_net+0xd2/0x330
>>>>>>> [<0>] nfs4_state_shutdown_net+0x2ad/0x410
>>>>>>> [<0>] nfsd_shutdown_net+0xb7/0x250
>>>>>>> [<0>] nfsd_last_thread+0x15f/0x2a0
>>>>>>> [<0>] nfsd_svc+0x388/0x3f0
>>>>>>> [<0>] write_threads+0x17e/0x2b0
>>>>>>> [<0>] nfsctl_transaction_write+0x91/0xf0
>>>>>>> [<0>] vfs_write+0x1c4/0x750
>>>>>>> [<0>] ksys_write+0xcb/0x170
>>>>>>> [<0>] do_syscall_64+0x70/0x120
>>>>>>> [<0>] entry_SYSCALL_64_after_hwframe+0x78/0xe2
>>>>>>> [root@nfs_test2 ~]#
>>>>>>>
>>>>>>> Fix this by decreasing cl_cb_inflight if the restart fails.
>>>>>>>
>>>>>>> Fixes: cba5f62b1830 ("nfsd: fix callback restarts")
>>>>>>> Signed-off-by: Li Lingfeng <lilingfeng3@xxxxxxxxxx>
>>>>>>> ---
>>>>>>>  fs/nfsd/nfs4callback.c | 10 +++++++---
>>>>>>>  1 file changed, 7 insertions(+), 3 deletions(-)
>>>>>>>
>>>>>>> diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
>>>>>>> index 484077200c5d..8a7d24efdd08 100644
>>>>>>> --- a/fs/nfsd/nfs4callback.c
>>>>>>> +++ b/fs/nfsd/nfs4callback.c
>>>>>>> @@ -1459,12 +1459,16 @@ static void nfsd4_cb_done(struct rpc_task *task, void *calldata)
>>>>>>>  static void nfsd4_cb_release(void *calldata)
>>>>>>>  {
>>>>>>>  	struct nfsd4_callback *cb = calldata;
>>>>>>> +	struct nfs4_client *clp = cb->cb_clp;
>>>>>>> +	int queued;
>>>>>>>
>>>>>>>  	trace_nfsd_cb_rpc_release(cb->cb_clp);
>>>>>>>
>>>>>>> -	if (cb->cb_need_restart)
>>>>>>> -		nfsd4_queue_cb(cb);
>>>>>>> -	else
>>>>>>> +	if (cb->cb_need_restart) {
>>>>>>> +		queued = nfsd4_queue_cb(cb);
>>>>>>> +		if (!queued)
>>>>>>> +			nfsd41_cb_inflight_end(clp);
>>>>>>> +	} else
>>>>>>>  		nfsd41_destroy_cb(cb);
>>>>>>>
>>>>>>>  }
>>>>>>
>>>>>> Good catch!
>>>>>>
>>>>>> Reviewed-by: Jeff Layton <jlayton@xxxxxxxxxx>
>>>>>>
>>>>>
>>>>> Actually, I think this is not quite right. It's a bit more subtle than
>>>>> it first appears. The problem of course is that the callback workqueue
>>>>> jobs run in a different task than the RPC workqueue jobs, so they can
>>>>> race.
>>>>>
>>>>> cl_cb_inflight gets bumped when the callback is first queued, and only
>>>>> gets released in nfsd41_destroy_cb(). If it fails to be queued, it's
>>>>> because something else has queued the workqueue job in the meantime.
>>>>>
>>>>> There are two places that can occur: nfsd4_cb_release() and
>>>>> nfsd4_run_cb(). Since this is occurring in nfsd4_cb_release(), the only
>>>>> other option is that something raced in and queued it via
>>>>> nfsd4_run_cb().
>>>>
>>>> What would be the "something" that raced in?
>>>>
>>>
>>> I think we may be able to get there via multiple __break_lease() calls
>>> on the same layout or delegation. That could mean multiple calls to the
>>> ->lm_break operation on the same object.
>>
>> Sorry for the late response, but isn't ->lm_break() already serialized in
>> __break_lease for the same file_lease? We don't call lm_break(fl) if
>> lease_breaking(fl).
>>
>
> lease_breaking() is only checked when want_write is false. IOW, if
> you're breaking the lease for write, then lm_break is always called.
>
> Is that a bug or a feature? I'm not sure, but it's been that way since
> ~2011.

Yeah.. why? Thanks, I missed that detail when I refreshed my memory of it
just now. It seems like you'd want to avoid repeatedly calling lm_break in
both cases; spamming the lock manager adds nothing. My 2 cents.

Ben