On Mon, Oct 10, 2011 at 05:59:11PM -0400, bfields wrote:
> On Wed, Sep 21, 2011 at 10:58:13AM -0400, J. Bruce Fields wrote:
> > In setlease, we use i_writecount to decide whether we can give out a
> > read lease.
> >
> > In open, we break leases before incrementing i_writecount.
> >
> > There is therefore a window between the break lease and the i_writecount
> > increment when setlease could add a new read lease.
> >
> > This would leave us with a simultaneous write open and read lease, which
> > shouldn't happen.
>
> Al, could you apply this for 3.2, if you don't see any problem?

Ping?  What should I do with this patch?

--b.

>
> (Patch 1 of this series only touches locks.c, and I'm queueing up such
> patches through the nfsd tree.  Patches 3-6 I intend to rewrite,
> probably not in time for 3.2 unless I'm very lucky.)
>
> --b.
>
> >
> > Signed-off-by: J. Bruce Fields <bfields@xxxxxxxxxx>
> > ---
> >  fs/namei.c |    5 +----
> >  fs/open.c  |    4 ++++
> >  2 files changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/fs/namei.c b/fs/namei.c
> > index 2826db3..6ff59e5 100644
> > --- a/fs/namei.c
> > +++ b/fs/namei.c
> > @@ -2044,10 +2044,7 @@ static int may_open(struct path *path, int acc_mode, int flag)
> >  	if (flag & O_NOATIME && !inode_owner_or_capable(inode))
> >  		return -EPERM;
> >
> > -	/*
> > -	 * Ensure there are no outstanding leases on the file.
> > -	 */
> > -	return break_lease(inode, flag);
> > +	return 0;
> >  }
> >
> >  static int handle_truncate(struct file *filp)
> > diff --git a/fs/open.c b/fs/open.c
> > index f711921..22c41b5 100644
> > --- a/fs/open.c
> > +++ b/fs/open.c
> > @@ -685,6 +685,10 @@ static struct file *__dentry_open(struct dentry *dentry, struct vfsmount *mnt,
> >  	if (error)
> >  		goto cleanup_all;
> >
> > +	error = break_lease(inode, f->f_flags);
> > +	if (error)
> > +		goto cleanup_all;
> > +
> >  	if (!open && f->f_op)
> >  		open = f->f_op->open;
> >  	if (open) {
> > --
> > 1.7.4.1
> >
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
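
For context, a rough sketch of the window the changelog describes.  The
setlease side is presumably the read-lease test in generic_setlease()
(fs/locks.c), which in kernels of this era looks roughly like the
following (not part of the patch above, shown only for illustration):

	/*
	 * generic_setlease(): refuse to hand out a read lease while the
	 * inode has writers (sketch of the check the changelog refers to).
	 */
	if ((arg == F_RDLCK) && (atomic_read(&inode->i_writecount) > 0))
		goto out;

With the old ordering, may_open() called break_lease() before the open
path took write access (the step that increments i_writecount), so an
F_SETLEASE request racing into that window saw i_writecount == 0 and
could grant a read lease even though a write open was already past its
lease break.  With break_lease() moved into __dentry_open(), after write
access has been taken, a racing read-lease request either fails the
i_writecount check or is broken before the open completes.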