My reaction is that over modern networks a 1 second attribute timeout is probably ok and 60 seconds is too slow (slowing it down beyond 5 seconds could certainly be noticeable to the user, not just create a higher data integrity risk) - but the obvious exception is slow networks. Since we can get the SMB echo times, we could base the choice of actimeo (1 second vs. 60 seconds) on what we estimate the round-trip delay to be.

On Mon, Nov 1, 2010 at 12:19 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> On Sun, 31 Oct 2010 22:44:13 +0300
> Pavel Shilovsky <piastryyy@xxxxxxxxx> wrote:
>
>> Add cifs_file_aio_read to let the client work with strict cache mode.
>>
>> Signed-off-by: Pavel Shilovsky <piastryyy@xxxxxxxxx>
>> ---
>>  fs/cifs/cifsfs.c |   34 ++++++++++++++++++++++++++++++++--
>>  1 files changed, 32 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
>> index e8b9523..21e0f47 100644
>> --- a/fs/cifs/cifsfs.c
>> +++ b/fs/cifs/cifsfs.c
>> @@ -570,6 +570,36 @@ cifs_get_sb(struct file_system_type *fs_type,
>>  	return 0;
>>  }
>>
>> +static ssize_t cifs_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
>> +				  unsigned long nr_segs, loff_t pos)
>> +{
>> +	struct inode *inode;
>> +	struct cifs_sb_info *cifs_sb;
>> +	ssize_t read;
>> +
>> +	inode = iocb->ki_filp->f_path.dentry->d_inode;
>> +	cifs_sb = CIFS_SB(iocb->ki_filp->f_path.dentry->d_sb);
>> +
>> +	if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) == 0) {
>> +		int retval;
>> +
>> +		retval = cifs_revalidate_file(iocb->ki_filp);
>> +		if (retval < 0)
>> +			return (loff_t)retval;
>> +
>           ^^^^^^^^^^^^^^
> I don't think we can reasonably do the above without some discussion of
> increasing the attribute cache timeout. It's currently hardcoded to 1s
> in cifs_inode_needs_reval.
>
> Doing a QFileInfo every second in order to see if the file has changed
> seems prohibitively expensive. OTOH, streaming reads without ever
> checking to see whether it has changed is dangerous. We'll have to try
> and strike a balance here.
>
> What may be best is to add an actimeo mount option that sets that value
> to something sane (60s?) and allows people to set it to whatever they
> like. Even better might be a sliding scale like NFS uses -- set the
> value to something low (and user-settable) initially, and every time we
> check it and it hasn't changed, double the timeout up to a maximum
> (also settable by user).
>
>> +		return generic_file_aio_read(iocb, iov, nr_segs, pos);
>> +	}
>> +
>> +	if (!CIFS_I(inode)->clientCanCacheRead) {
>> +		read = cifs_user_read(iocb->ki_filp, iov->iov_base,
>> +				      iov->iov_len, &pos);
>> +		iocb->ki_pos = pos;
>> +	} else
>> +		read = generic_file_aio_read(iocb, iov, nr_segs, pos);
>> +
>> +	return read;
>> +}
>> +
>>  static ssize_t cifs_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
>>  				   unsigned long nr_segs, loff_t pos)
>>  {
>> @@ -692,7 +722,7 @@ const struct inode_operations cifs_symlink_inode_ops = {
>>  const struct file_operations cifs_file_ops = {
>>  	.read = do_sync_read,
>>  	.write = do_sync_write,
>> -	.aio_read = generic_file_aio_read,
>> +	.aio_read = cifs_file_aio_read,
>>  	.aio_write = cifs_file_aio_write,
>>  	.open = cifs_open,
>>  	.release = cifs_close,
>> @@ -729,7 +759,7 @@ const struct file_operations cifs_file_direct_ops = {
>>  const struct file_operations cifs_file_nobrl_ops = {
>>  	.read = do_sync_read,
>>  	.write = do_sync_write,
>> -	.aio_read = generic_file_aio_read,
>> +	.aio_read = cifs_file_aio_read,
>>  	.aio_write = cifs_file_aio_write,
>>  	.open = cifs_open,
>>  	.release = cifs_close,
>
>
> --
> Jeff Layton <jlayton@xxxxxxxxxx>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-cifs" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>

--
Thanks,

Steve