Re: [PATCH 2/4] CIFS: Invalidate inode pages on the last close

On Tue, 26 Oct 2010 09:46:34 -0500
Steve French <smfrench@xxxxxxxxx> wrote:

> On Tue, Oct 26, 2010 at 9:25 AM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > On Tue, 26 Oct 2010 17:01:26 +0400
> > Pavel Shilovsky <piastryyy@xxxxxxxxx> wrote:
> >
> >> When we close the last file handle of the inode we should invalidate it
> >> to prevent data coherency problems when we open it again after it has
> >> been modified by other clients.
> >>
> >> Signed-off-by: Pavel Shilovsky <piastryyy@xxxxxxxxx>
> >> ---
> >>  fs/cifs/file.c |   14 ++++++++++++--
> >>  1 files changed, 12 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/fs/cifs/file.c b/fs/cifs/file.c
> >> index d7c212a..02a045e 100644
> >> --- a/fs/cifs/file.c
> >> +++ b/fs/cifs/file.c
> >> @@ -150,8 +150,14 @@ client_can_cache:
> >>               pCifsInode->clientCanCacheAll = true;
> >>               pCifsInode->clientCanCacheRead = true;
> >>               cFYI(1, "Exclusive Oplock granted on inode %p", inode);
> >> -     } else if ((oplock & 0xF) == OPLOCK_READ)
> >> +     } else if ((oplock & 0xF) == OPLOCK_READ) {
> >> +             pCifsInode->clientCanCacheAll = false;
> >>               pCifsInode->clientCanCacheRead = true;
> >> +             cFYI(1, "Level II Oplock granted on inode %p", inode);
> >> +     } else {
> >> +             pCifsInode->clientCanCacheAll = false;
> >> +             pCifsInode->clientCanCacheRead = false;
> >> +     }
> >>
> >
> > The oplock handling in cifs is rather ad-hoc. There's a lot of
> > cut-and-pasted code that sets the clientCanCache flags. The flags
> > themselves are also sort of hard to understand -- i.e. why is it "All"
> > and not "Write"?
> >
> > This would be a good opportunity to consolidate that code into helper
> > functions and just call them from the appropriate places.
> >
> > Perhaps a helper function that takes an oplock value and sets the
> > clientCanCache flags appropriately? Or...maybe consider doing away with
> > the canCache flags and just store the oplock value in the inode and
> > have helper macros or something that read that value and tell you what
> > can be done with it.
> >
> >   i.e.: CLIENT_CAN_CACHE_READ(inode), CLIENT_CAN_CACHE_WRITE(inode)
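
To make the helper-function idea concrete, a consolidated routine could
look roughly like this (the function name is hypothetical and the
exclusive-oplock check is assumed to mirror the existing one above the
quoted hunk; the flag names and the OPLOCK_READ test come straight from
it):

/* Hypothetical consolidation helper -- a sketch, not existing cifs code. */
static void cifs_set_oplock_level(struct cifsInodeInfo *pCifsInode, int oplock)
{
	if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) {
		/* exclusive oplock: read and write caching allowed */
		pCifsInode->clientCanCacheAll = true;
		pCifsInode->clientCanCacheRead = true;
	} else if ((oplock & 0xF) == OPLOCK_READ) {
		/* level II oplock: read caching only */
		pCifsInode->clientCanCacheAll = false;
		pCifsInode->clientCanCacheRead = true;
	} else {
		/* no oplock: no client-side caching */
		pCifsInode->clientCanCacheAll = false;
		pCifsInode->clientCanCacheRead = false;
	}
}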
> 
> Remember that the reason we don't have a can_cache_write is that there
> is no smb/cifs protocol concept equivalent to "can_cache_write"
> independent of caching reads - there are basically three types:
> 1) batch oplocks (read/write and open/close caching) which we don't
> use (although will in smb2)
> 2) read/write
> 3) read
> 

I understand that, but from a coding standpoint I don't care. All I
care about is: "What can I do with the cached data attached to this
inode right now?"

That's why I think it might be saner to get rid of the bools, store the
oplock value in the inode, and have macros that tell me what I can
actually do with it.
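
Roughly like this (the 'oplock' field in cifsInodeInfo and these exact
checks are only a sketch to illustrate the idea, not existing code;
batch oplocks, once we use them, would satisfy both macros):

/* Sketch only: assumes cifsInodeInfo remembers the granted oplock level. */
#define CLIENT_CAN_CACHE_READ(cinode) \
	(((cinode)->oplock & 0xF) == OPLOCK_EXCLUSIVE || \
	 ((cinode)->oplock & 0xF) == OPLOCK_READ)

#define CLIENT_CAN_CACHE_WRITE(cinode) \
	(((cinode)->oplock & 0xF) == OPLOCK_EXCLUSIVE)

The open/close and oplock-break paths would then just record the level
the server granted, and everything else asks the macros.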

-- 
Jeff Layton <jlayton@xxxxxxxxxx>
--
To unsubscribe from this list: send the line "unsubscribe linux-cifs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

