On 23 Aug 2017, at 16:15, Trond Myklebust wrote:

> On Wed, 2017-08-23 at 16:11 -0400, Benjamin Coddington wrote:
>> Ping on this one as well -- it was buried in a thread:
>>
>> Ben
>>
>> On 2 Aug 2017, at 7:27, Benjamin Coddington wrote:
>>
>>> If the wait for a LOCK operation is interrupted, and then the file is
>>> closed, the locks cleanup code will assume that no new locks will be
>>> added to the inode after it has completed. We already have a mechanism
>>> to detect if there was a signal, so let's use that to avoid recreating
>>> the local lock once the RPC completes.
>>>
>>> Signed-off-by: Benjamin Coddington <bcodding@xxxxxxxxxx>
>>> Reviewed-by: Jeff Layton <jlayton@xxxxxxxxxx>
>>> ---
>>>  fs/nfs/nfs4proc.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
>>> index dbfa18900e25..5256f429c268 100644
>>> --- a/fs/nfs/nfs4proc.c
>>> +++ b/fs/nfs/nfs4proc.c
>>> @@ -6100,7 +6100,7 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
>>>   case 0:
>>>           renew_lease(NFS_SERVER(d_inode(data->ctx->dentry)),
>>>                           data->timestamp);
>>> -         if (data->arg.new_lock) {
>>> +         if (data->arg.new_lock && !data->cancelled) {
>>>                   data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS);
>>>                   if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) < 0) {
>>>                           rpc_restart_call_prepare(task);
>>>
>
> Why do we want to special case '0'? Surely we don't want to restart the
> RPC call for any of the error cases if data->cancelled is set.

We don't want to add the local lock if data->cancelled is set. It's
possible that the file has already been closed and the locks cleanup code
has removed all of the local locks, so if this races in and adds a lock we
end up with one left over.

I don't understand what you mean about restarting the RPC call - we'd only
restart the RPC call here if there was an error adding the local lock.

Ben
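
To make the race concrete, here is a minimal userspace C sketch of the
scenario described above. It is an illustration only: the names
(lock_data, lock_done, close_cleanup, local_lock_count) are invented
stand-ins for the kernel structures, not the real nfs4proc.c code.

/*
 * Hypothetical userspace sketch (not kernel code) of the race the patch
 * closes: a blocking lock request is interrupted (cancelled), the file is
 * closed and its local lock list is torn down, and only afterwards does
 * the completion handler for the LOCK RPC run.  Without the cancelled
 * check, the handler would re-add a local lock that nothing will clean up.
 */
#include <stdbool.h>
#include <stdio.h>

struct lock_data {
        bool new_lock;          /* this request creates a new local lock */
        bool cancelled;         /* the waiting process was interrupted */
};

static int local_lock_count;    /* stand-in for the inode's local lock list */

/* Models the RPC completion callback (nfs4_lock_done in the real patch). */
static void lock_done(struct lock_data *data)
{
        /* Patched behaviour: skip the local lock if the wait was cancelled. */
        if (data->new_lock && !data->cancelled)
                local_lock_count++;
}

/* Models close(): locks cleanup assumes no new locks appear after this. */
static void close_cleanup(void)
{
        local_lock_count = 0;
}

int main(void)
{
        struct lock_data data = { .new_lock = true, .cancelled = true };

        close_cleanup();        /* file closed after the wait was interrupted */
        lock_done(&data);       /* the LOCK RPC completes afterwards */

        printf("leftover local locks: %d\n", local_lock_count);  /* prints 0 */
        return 0;
}

With the cancelled check, the late completion leaves nothing behind;
without it, local_lock_count would end at 1 with no cleanup left to run,
which is the leftover lock described above.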