On Tue, Mar 3, 2015 at 10:30 AM, Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:
>
> On Mar 3, 2015, at 8:29 AM, Chris Perl <cperl@xxxxxxxxxxxxxx> wrote:
>
>> Just to quickly answer the earlier question about whether I have
>> received a good answer from this list: Yes, I have, and I appreciate
>> all the time and discussion spent helping me understand what I've
>> been seeing.
>>
>>> I read Chris’s e-mail as a request for more detail in the answer
>>> I proposed before. Maybe I’m wrong.
>>>
>>> Is this not sufficient:
>>>
>>> “Because NFS is not a cluster or “single system image” filesystem,
>>> applications must provide proper serialization of reads and writes
>>> among multiple clients to ensure correct application behavior and
>>> prevent corruption of file data. The close-to-open mechanism is not
>>> adequate in the presence of concurrent opens for write when multiple
>>> clients are involved.”
>>>
>>> Perhaps the “presence of concurrent opens for write” should be
>>> replaced with “presence of concurrent writes.” But what else needs
>>> to be said?
>>
>> I'm trying to decide if, having read the above in the FAQ, it would
>> have been obvious to me that what I was seeing was expected.
>
> That’s exactly the right question to ask.
>
>> I don't know that it would have been. That paragraph seems to impart
>> the idea that if you don't synchronize your readers and writers you
>> might get weird results, much like a local file system. However, I
>> thought the way my updates were happening made this OK, as I was
>> only ever appending data to a given file.
>>
>> I guess I was hoping for something more along the lines of (this
>> would be meant to be inserted after paragraph 6 of A8 in the FAQ):
>>
>> Furthermore, in the presence of multiple clients, at least one of
>> which is writing to a given file, close-to-open cache consistency
>> makes no guarantees about the data that might be returned from the
>> cache on a client issuing reads concurrent with those writes. It may
>> return stale data or it may return incorrect data.
>
> This works for me. Bruce, Trond, any thoughts?

I'd say we can give stronger advice. As I said earlier, close-to-open
is a known model that is implemented by most NFS clients that want to
cache data, so I see no reason not to go into more detail. If we ever
change the Linux client behaviour in ways that differ, then we can no
longer claim to be doing close-to-open.

So here are the things we can document:

1) The client will revalidate its cached attributes and data on
open(). If the file or directory has changed on the server, the data
cache will be invalidated/flushed.

2) The client will revalidate its cached attributes on close().

3) Between open() and close(), the behaviour w.r.t. cache revalidation
is undefined. The client may on occasion check whether or not its
cached data is valid, but applications must not rely on such
behaviour. If the file is changed on the server during that time, then
the behaviour of read()/readdir() on the client is undefined. In
particular, note that read() may return stale data and/or holes.

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@xxxxxxxxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
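The three points above imply a concrete access pattern for a reader polling a file that another client is appending to: since cache revalidation between open() and close() is undefined, the reader should open, read, and close on every pass rather than hold a long-lived descriptor. A minimal Python sketch of that pattern (the function name and path handling are illustrative, not from the thread):

```python
def read_fresh(path, offset=0):
    """Read everything past `offset`, re-opening the file each time.

    Per point 1 above, open() forces the NFS client to revalidate its
    cached attributes and data; per point 3, a long-lived fd gives no
    such guarantee. So a poller must open/read/close on every pass
    instead of keeping the file open and calling read() in a loop.
    """
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
    # Return the data plus the next offset to resume from.
    return data, offset + len(data)
```

Each call sees at least the data the writer had flushed to the server before its most recent close(); reads on a descriptor held open across the writer's updates may instead return stale data or holes, as point 3 warns.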
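The FAQ text quoted earlier says applications "must provide proper serialization of reads and writes among multiple clients." One common way to do that is POSIX (fcntl) advisory locking, which NFS supports; on the Linux NFS client, acquiring such a lock also causes cached data for the file to be revalidated. A sketch, assuming a shared path on the NFS mount (the path and function names here are hypothetical):

```python
import fcntl
import os

PATH = "/tmp/shared.log"  # hypothetical; in practice a file on the NFS mount


def append_record(record: bytes) -> None:
    """Append under an exclusive POSIX lock so concurrent writers serialize."""
    with open(PATH, "ab") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)
        try:
            f.write(record)
            f.flush()
            os.fsync(f.fileno())  # push the data to the server before unlocking
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)


def read_all() -> bytes:
    """Open fresh and read under a shared lock.

    The open() triggers close-to-open revalidation, and the lock
    serializes against in-flight writers.
    """
    with open(PATH, "rb") as f:
        fcntl.lockf(f, fcntl.LOCK_SH)
        try:
            return f.read()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)
```

Opening fresh for each read, rather than reusing a descriptor, is what keeps this within the close-to-open guarantees rather than the undefined between-open-and-close behaviour.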