On Wed, 14 Oct 2015, Jeff Layton wrote:

> On Wed, 14 Oct 2015 14:23:37 -0400
> Benjamin Coddington <bcodding@xxxxxxxxxx> wrote:
>
> > NFS unlock procedures will wait for IO to complete before sending an unlock.
> > In the case that this wait is interrupted, an unlock may never be sent if
> > the unlock is part of cleaning up locks during a close. This lost lock can
> > then prevent other clients from locking the file.
> >
> > Fix this by deferring an unlock that should wait for IO during FL_CLOSE by
> > copying it to a list on the nfs_lock_context, which can then be used to
> > release the lock when the IO has completed.
> >
> > Signed-off-by: Benjamin Coddington <bcodding@xxxxxxxxxx>
> > ---
> >  fs/nfs/file.c          | 36 +++++++++++++++++++++++++++++++++++-
> >  fs/nfs/inode.c         |  1 +
> >  fs/nfs/pagelist.c      | 23 ++++++++++++++++++++---
> >  include/linux/nfs_fs.h |  7 +++++++
> >  4 files changed, 63 insertions(+), 4 deletions(-)
> >
> > diff --git a/fs/nfs/file.c b/fs/nfs/file.c
> > index d16c50f..460311a 100644
> > --- a/fs/nfs/file.c
> > +++ b/fs/nfs/file.c
> > @@ -738,6 +738,36 @@ out_noconflict:
> >  }
> >
> >  static int
> > +defer_unlk(struct nfs_lock_context *l_ctx, int cmd, struct file_lock *fl)
> > +{
> > +	struct inode *inode = d_inode(l_ctx->open_context->dentry);
> > +	struct nfs_io_counter *c = &l_ctx->io_count;
> > +	struct nfs_deferred_unlock *dunlk;
> > +	int status = 0;
> > +
> > +	if (atomic_read(&c->io_count) == 0)
> > +		return 0;
> > +
> > +	/* free in nfs_iocounter_dec */
> > +	dunlk = kmalloc(sizeof(*dunlk), GFP_NOFS);
> > +	if (dunlk == NULL)
> > +		return -ENOMEM;
> > +
>
> This is a little ugly...
> You're probably going to be calling this from something like
> locks_remove_posix, and if this allocation fails then the unlock will
> just never happen.
>
> Is there any way to avoid this allocation?

Yes! As you go on to suggest..

> The "cmd" field in nfs_deferred_unlock is more or less redundant. We're
> always calling this with that set to F_UNLCK.
> We also know that this
> will be called on the whole file range. Maybe we can simply add a flag
> to the lock context to indicate whether we should send a whole-file
> unlock on it when the io_count goes to zero.

That simplifies things quite a bit.. I'm going to resubmit this with that
approach. Thanks!

> Also, on a somewhat related note...we aren't currently setting FL_CLOSE
> in locks_remove_flock and we probably should be.

I'll add that as well.

Ben
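For readers following along, here is a minimal userspace sketch of the flag-based approach discussed above: rather than allocating a deferred-unlock record at close time (which can fail and silently drop the unlock), the close path sets a flag on the lock context, and whichever I/O completion drops the count to zero sends a single whole-file unlock. The structure and function names below are illustrative stand-ins, not the actual NFS client code, and the real kernel paths need their own synchronization.

```c
#include <stdatomic.h>
#include <stdbool.h>

struct lock_context {
	atomic_int io_count;        /* outstanding I/O requests */
	atomic_bool unlock_pending; /* whole-file F_UNLCK owed when io_count hits 0 */
	atomic_int unlocks_sent;    /* stands in for the unlock RPC, for observation */
};

static void send_whole_file_unlock(struct lock_context *ctx)
{
	/* in the kernel this would issue the F_UNLCK over the wire */
	atomic_fetch_add(&ctx->unlocks_sent, 1);
}

/*
 * Close path: if I/O is still in flight, defer by setting a flag rather
 * than allocating, so there is no -ENOMEM path that loses the unlock.
 * Returns true if the unlock was deferred.
 */
static bool defer_unlock(struct lock_context *ctx)
{
	if (atomic_load(&ctx->io_count) == 0)
		return false;   /* nothing in flight: caller unlocks now */
	atomic_store(&ctx->unlock_pending, true);
	return true;
}

/*
 * I/O completion path (loosely modeled on nfs_iocounter_dec): the thread
 * that drops the count to zero sends the deferred unlock, if one is owed.
 */
static void io_complete(struct lock_context *ctx)
{
	if (atomic_fetch_sub(&ctx->io_count, 1) == 1 &&
	    atomic_exchange(&ctx->unlock_pending, false))
		send_whole_file_unlock(ctx);
}
```

Because the unlock is always whole-file F_UNLCK here, no per-unlock state (range, cmd) needs to be stored, which is exactly why the flag suffices and the kmalloc can go away.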