On 29 May 2018, at 10:02, Trond Myklebust wrote:

> On Thu, 2018-05-03 at 07:12 -0400, Benjamin Coddington wrote:
>> If the wait for a LOCK operation is interrupted, and then the file is
>> closed, the locks cleanup code will assume that no new locks will be
>> added to the inode after it has completed. We already have a mechanism
>> to detect if there was a signal, so let's use that to avoid recreating
>> the local lock once the RPC completes. Also skip re-sending the LOCK
>> operation for the various error cases if we were signaled.
>>
>> Signed-off-by: Benjamin Coddington <bcodding@xxxxxxxxxx>
>> ---
>>  fs/nfs/nfs4proc.c | 24 ++++++++++++++----------
>>  1 file changed, 14 insertions(+), 10 deletions(-)
>>
>> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
>> index 47f3c273245e..1aba009a5ef8 100644
>> --- a/fs/nfs/nfs4proc.c
>> +++ b/fs/nfs/nfs4proc.c
>> @@ -6345,32 +6345,36 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
>>  	case 0:
>>  		renew_lease(NFS_SERVER(d_inode(data->ctx->dentry)),
>>  				data->timestamp);
>> -		if (data->arg.new_lock) {
>> +		if (data->arg.new_lock && !data->cancelled) {
>>  			data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS);
>> -			if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) < 0) {
>> -				rpc_restart_call_prepare(task);
>> +			if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) > 0)
>
> AFAICS this will never be true; It looks to me as if
> locks_lock_inode_wait() always returns '0' or a negative error value.
> Am I missing something?

No, you're not missing anything, you're catching a typo. It should be:

+			if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) < 0)

We want to break out of the switch and restart the RPC call if
locks_lock_inode_wait() returns an error.

Should I send another version, or can you fix it up?
Ben