On Sun, 2009-02-22 at 19:23 -0600, Steve French wrote:
> The CIFS implementation of fsync flushed the cache on the client
> (filemap_fdatawait etc.), but did not send the SMB FLUSH operation to
> the server requesting that the server write all data for this inode to
> the metal on the server side (until now). Some servers have a range of
> configuration options to handle dumb applications that sync too often
> (which is common on windows) or strange workloads, e.g. "strict sync =
> no" to "sync always" (in which the server issues fsync locally after
> each SMB write is processed).
>
> The suggestion was to add a mount option on the cifs client
> ("nostrictsync") to allow it to optionally just flush all of the
> writes (and wait for write responses) but not force the server to sync
> them to the metal (as a strict interpretation of sync would require).

In NFS we always enforce the strict rule that we COMMIT on close(), but
I do know that a lot of *BSD-based implementations skip that step. As
long as the server doesn't reboot, and/or the file is not shared with
other clients, that is a reasonable strategy.

That said, I do also agree that you might want to enforce stricter
rules by default, and then allow admins to relax them using a mount
option.

Speaking of cache consistency, what about the ability to flush caches?
Is there any interest within the CIFS community in allowing user
applications to invalidate the page cache and/or attribute caches when
no delegation/oplock is held and the app knows that a file or directory
may have changed on the server? It is an issue that keeps getting
raised on the NFS side, but one that we haven't addressed yet.

Cheers
  Trond
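
For concreteness, here is a minimal userspace sketch of the operation
whose semantics are being debated: a write followed by fsync() on a
CIFS mount. The path is purely illustrative, and "nostrictsync" is only
the option name proposed above, not a shipping interface; under the
strict interpretation the fsync() below also triggers an SMB FLUSH on
the server, while under the relaxed one it only waits for the write
responses.

/*
 * Minimal sketch of the userspace-visible question: after the fsync()
 * below returns on a CIFS mount, has the server merely acknowledged
 * the writes (client cache flushed, write responses received), or has
 * it also been asked, via SMB FLUSH, to commit the inode to stable
 * storage?  The proposed "nostrictsync" mount option would guarantee
 * only the former; the default would remain the stricter behaviour.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	/* path on a CIFS mount; purely illustrative */
	const char *path = argc > 1 ? argv[1] : "/mnt/cifs/testfile";
	const char buf[] = "important data\n";
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, buf, sizeof(buf) - 1) < 0) {
		perror("write");
		return 1;
	}
	if (fsync(fd) < 0) {
		perror("fsync");
		return 1;
	}
	return close(fd) < 0;
}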
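
On the cache-flushing question, a sketch of the closest thing an
application can do today when it believes a file has changed on the
server: posix_fadvise(POSIX_FADV_DONTNEED) is only a hint, typically
drops clean page-cache pages, and says nothing about attribute caches
or revalidation on NFS/CIFS, which is exactly the gap being asked
about. drop_file_cache() is a hypothetical helper, not an existing
interface.

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>

/* Hint the kernel to drop cached pages for the whole file. */
static int drop_file_cache(const char *path)
{
	int fd = open(path, O_RDONLY);
	int ret;

	if (fd < 0)
		return -1;
	/* offset 0 and len 0 cover the whole file */
	ret = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
	close(fd);
	return ret;	/* 0 on success, error number otherwise */
}

int main(int argc, char **argv)
{
	return argc > 1 ? drop_file_cache(argv[1]) != 0 : 0;
}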