Re: [PATCH] cifs: cifs_flush should wait for writeback to complete before proceeding


 



On Fri, 24 Sep 2010 14:36:19 -0500
Steve French <smfrench@xxxxxxxxx> wrote:

> On Fri, Sep 24, 2010 at 2:32 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > On Fri, 24 Sep 2010 14:24:31 -0500
> > Steve French <smfrench@xxxxxxxxx> wrote:
> >
> >> On Fri, Sep 24, 2010 at 2:08 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> >> > On Fri, 24 Sep 2010 13:43:40 -0500
> >> > Steve French <smfrench@xxxxxxxxx> wrote:
> >> >
> >> >> I don't think it is a correctness issue - if close wants to do an
> >> >> fsync, why do we have a flush routine at all? close is the only place
> >> >> flush is called.  It seems very wrong to require additional
> >> >> semantics beyond Unix semantics here (and it slows close performance
> >> >> way down unnecessarily).  Even if we go async we would initiate i/o on
> >> >> these before we return close to the user - and of course we are not
> >> >> going to close the network handle until all network writes complete.
> >> >>
> >> >> At a minimum, we don't need to do an fsync (flush with wait) on close
> >> >> if there is more than one handle to that inode open - we should be
> >> >> able to just do a flush.
> >> >>
> >> >
> >> > What does this have to do with fsync? The flush operation is to flush
> >> > out data to the server prior to close. CIFS is not like a local fs or
> >> > even NFS. We have to have an open filehandle in order to write out
> >> > data.
> >>
> >> fsync is an fs file operation, handle based - so why do we need a distinct
> >> flush call if it has identical semantics?
> >>
> >>
> >
> > fsync has much stricter semantics than close. For many local
> > filesystems that implies a barrier or something equivalent to ensure
> > that the data actually made it to the media. For the CIFS case, we also
> > issue an SMB_COM_FLUSH in the fsync case, but not for close.
> >
> > flush is for the close(2) call. Not all filesystems need to flush data
> > to disk on close. A local filesystem may be perfectly able to keep
> > the data in pagecache and write it out whenever it gets to it even
> > long after all file descriptors are closed.
> >
> > CIFS is not like that -- no filehandle == no writeback. Thus, we need to
> > wait for writeback to complete before allowing writable filehandles to
> > be closed.
> 
> ? We do that already - we don't close the last writeable filehandle until
> all writes complete.  If we had to, we could hold up the close of the
> user's last writeable file handle until all writes complete and
> we can close the last writeable cifs network handle - but we end up
> doing that already.
> 
> 

Can you point out where the code waits for writeback to complete before
the last writable filehandle is closed? I'm not seeing it...
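
For context, here is a minimal sketch of the behavior the patch subject
proposes: have the ->flush file operation, which the VFS calls from
close(2), wait for writeback rather than merely start it.  This is
illustrative only, not the submitted patch; the function name
example_cifs_flush is made up.

#include <linux/fs.h>

static int example_cifs_flush(struct file *file, fl_owner_t id)
{
	struct inode *inode = file->f_path.dentry->d_inode;
	int rc = 0;

	/* only files opened for write can have dirty pages to push out */
	if (file->f_mode & FMODE_WRITE)
		rc = filemap_write_and_wait(inode->i_mapping);

	/* returning an error lets close(2) report a failed writeback */
	return rc;
}
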

-- 
Jeff Layton <jlayton@xxxxxxxxxx>

