On Wed, 2007-11-14 at 22:50 +0100, Peter Zijlstra wrote:
> Right, but I guess what Nick asked is, if pages could be stale to start
> with, how is that avoided in the future.
>
> The way I understand it, this re-validate is just a best effort at
> getting a coherent image.

The normal convention for NFS is to use a close-to-open cache
consistency model. In that model, applications must agree never to open
the file for reading or writing if an application on a different NFS
client already holds it open for writing. However, there is no standard
locking model for _enforcing_ such an agreement, so some setups do
violate it.

One obvious model that we try to support is the one where applications
use POSIX locking in order to ensure exclusive access to the data when
it is required.

Another model is to rely instead on synchronous writes and heavy
attribute revalidation to detect when a competing application has
written to the file (the 'noac' mount option). While such a model is
obviously deficient in that it can never guarantee cache coherency, we
do attempt to ensure that it works on a per-operation basis (IOW: we
check cache coherency before each call to read(), to mmap(), etc.),
since it is by far the easiest model to apply if you have applications
that cannot be rewritten and that satisfy the requirement that they
rarely conflict.

Cheers
  Trond
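[Not part of the original mail: a minimal sketch of what the first
(POSIX-locking) model looks like from an application's point of view.
The file path /mnt/nfs/shared.dat is a hypothetical example; the
assumption is that the NFS client treats lock acquisition and release as
cache-coherency points, so cooperating applications on different clients
that bracket their I/O with fcntl() locks see each other's writes.]

/*
 * Illustrative sketch: serialize access to a shared file on an NFS
 * mount with a POSIX record lock, then read under the lock.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/nfs/shared.dat", O_RDWR);	/* hypothetical path */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	struct flock fl;
	memset(&fl, 0, sizeof(fl));
	fl.l_type = F_WRLCK;	/* exclusive lock ... */
	fl.l_whence = SEEK_SET;
	fl.l_start = 0;
	fl.l_len = 0;		/* ... over the whole file */

	/* Block until the lock is granted; cached data for the file is
	 * revalidated at this point, so the read below is coherent. */
	if (fcntl(fd, F_SETLKW, &fl) < 0) {
		perror("fcntl(F_SETLKW)");
		return 1;
	}

	char buf[128];
	ssize_t n = pread(fd, buf, sizeof(buf), 0);
	if (n >= 0)
		printf("read %zd bytes under the lock\n", n);

	/* Any dirty data is written back before the lock is dropped,
	 * so the next locker (on any client) sees it. */
	fl.l_type = F_UNLCK;
	fcntl(fd, F_SETLK, &fl);

	close(fd);
	return 0;
}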