wake_up_var() needs a barrier after the important change is made in the
var and before wake_up_var() is called, otherwise it is possible that a
wake-up won't be sent when it should be.

In each case here the var is changed in an "atomic" manner, so
smp_mb__after_atomic() is sufficient.

In one case the important change (removing the lease) is performed
*after* the wake_up, which is backwards.  The code survives in part
because the wait_var_event() is given a timeout.

This patch adds the required barriers and calls destroy_delegation()
*before* waking any threads waiting for the delegation to be destroyed.

Signed-off-by: NeilBrown <neilb@xxxxxxx>
---
 fs/nfsd/nfs4state.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 3f85575ee0db..10e3a46289a1 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4700,6 +4700,7 @@ void nfsd4_cstate_clear_replay(struct nfsd4_compound_state *cstate)
 	if (so != NULL) {
 		cstate->replay_owner = NULL;
 		atomic_set(&so->so_replay.rp_locked, RP_UNLOCKED);
+		smp_mb__after_atomic();
 		wake_up_var(&so->so_replay.rp_locked);
 		nfs4_put_stateowner(so);
 	}
@@ -5000,6 +5001,7 @@ move_to_close_lru(struct nfs4_ol_stateid *s, struct net *net)
 	 * so tell them to stop waiting.
 	 */
 	atomic_set(&oo->oo_owner.so_replay.rp_locked, RP_UNHASHED);
+	smp_mb__after_atomic();
 	wake_up_var(&oo->oo_owner.so_replay.rp_locked);
 
 	wait_event(close_wq, refcount_read(&s->st_stid.sc_count) == 2);
@@ -7469,8 +7471,9 @@ nfsd4_delegreturn(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 		goto put_stateid;
 
 	trace_nfsd_deleg_return(stateid);
-	wake_up_var(d_inode(cstate->current_fh.fh_dentry));
 	destroy_delegation(dp);
+	smp_mb__after_atomic();
+	wake_up_var(d_inode(cstate->current_fh.fh_dentry));
 put_stateid:
 	nfs4_put_stid(&dp->dl_stid);
 out:
-- 
2.44.0
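
P.S. For reviewers, a minimal sketch of the waiter/waker pairing these
barriers rely on.  The waiter side below is illustrative only -- the
actual wait sites in nfs4state.c use conditions (and, in one case, a
timeout) specific to each call site; the waker side mirrors what this
patch does:

	/* Waker: make the change, then order it before the wakeup.
	 * wake_up_var() only wakes threads already on the waitqueue,
	 * so the store must be visible before the waitqueue check,
	 * otherwise a waiter that has just tested the old value can
	 * go to sleep and never be woken.
	 */
	atomic_set(&so->so_replay.rp_locked, RP_UNLOCKED);
	smp_mb__after_atomic();	/* pairs with barrier in the waiter */
	wake_up_var(&so->so_replay.rp_locked);

	/* Waiter: wait_var_event() re-checks the condition after
	 * putting itself on the waitqueue, which is the step the
	 * smp_mb__after_atomic() above pairs with.
	 */
	wait_var_event(&so->so_replay.rp_locked,
		       atomic_read(&so->so_replay.rp_locked) == RP_UNLOCKED);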