On Fri, 24 Sep 2010 13:43:40 -0500 Steve French <smfrench@xxxxxxxxx> wrote:

> I don't think it is a correctness issue - if close wants to do an
> fsync, why do we have a flush routine at all? close is the only place
> flush is called. It seems very wrong to require additional semantics
> beyond Unix semantics here (and it slows close performance way down
> unnecessarily). Even if we go async, we would initiate I/O on these
> before returning from close to the user - and of course we are not
> going to close the network handle until all network writes complete.
>
> At a minimum, we don't need to do an fsync (flush with wait) on close
> if there is more than one handle open on that inode - we should be
> able to just do a flush.

What does this have to do with fsync? The flush operation exists to
flush dirty data out to the server prior to close.

CIFS is not like a local fs, or even NFS. We have to have an open
filehandle in order to write out data, so we have no choice but to
ensure that all of the data is written out to the server before we
allow the filehandle to be closed. Waiting for writeback to complete
is mandatory here. If we don't wait, there is nowhere for that data
to go but into the bit-bucket. (See the first sketch below.)

You also can't reasonably try to get clever and only wait when the
"last" filehandle is being closed. Any check you do there will be
racy. Suppose we have two filps being closed simultaneously, and
they're both in .flush at the same time - how do you ensure that one
of them sticks around and waits for the flush to complete? (See the
second sketch below.)
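To make the first point concrete, a ->flush along these lines has to
look roughly like the following. This is just an illustrative sketch,
not the exact fs/cifs/file.c code:

	#include <linux/fs.h>

	/*
	 * Illustrative sketch of a ->flush that does what's described
	 * above: start writeback on the file's dirty pages and wait for
	 * it to complete before the filehandle is allowed to go away.
	 */
	static int example_flush(struct file *file, fl_owner_t id)
	{
		int rc = 0;

		if (file->f_mode & FMODE_WRITE)
			/*
			 * filemap_write_and_wait() kicks off writeback for
			 * every dirty page in the mapping and blocks until
			 * the I/O completes. Returning its error is also
			 * what lets close(2) report a writeback failure to
			 * userspace.
			 */
			rc = filemap_write_and_wait(file->f_mapping);

		return rc;
	}

The important part is the "_and_wait". An async kick without the wait
leaves pages dirty after the last handle is gone, with nothing left
to write them.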
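And here's the sort of "only wait on the last handle" check I mean,
again as a sketch of the approach being argued against, not anything
in the tree. open_handle_count() here is a made-up helper for
illustration; assume it returns the number of filehandles currently
open against the inode:

	#include <linux/fs.h>

	/*
	 * Hypothetical helper, for illustration only: returns the number
	 * of open filehandles on this inode.
	 */
	extern int open_handle_count(struct inode *inode);

	/* The racy "optimization": skip the wait unless we appear to be
	 * the last handle. */
	static int racy_flush(struct file *file, fl_owner_t id)
	{
		struct inode *inode = file->f_mapping->host;

		/*
		 * RACY: two tasks closing the last two filps concurrently
		 * can both see a count of 2 here, both take the no-wait
		 * branch, and both return. Once both handles are closed,
		 * any dirty pages that remain are orphaned.
		 */
		if (open_handle_count(inode) > 1)
			return filemap_fdatawrite(file->f_mapping); /* start I/O, no wait */

		return filemap_write_and_wait(file->f_mapping);
	}

Closing that window means serializing the count check against the
writeback itself, at which point you haven't bought anything over
just waiting in the first place.

-- 
Jeff Layton <jlayton@xxxxxxxxxx>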