On Mon, 20 Sep 2010 14:25:36 -0400 "J. Bruce Fields" <bfields@xxxxxxxxxxxx> wrote:

> On Mon, Sep 20, 2010 at 10:41:59AM -0400, Chuck Lever wrote:
> > At one point long ago, I had asked Trond if we could get rid of the
> > cache-invalidation-on-lock behavior if "-onolock" was in effect. He
> > said at the time that this would eliminate the only recourse
> > applications have for invalidating the data cache in case it was
> > stale, and NACK'd the request.
>
> Argh. I guess I can see the argument, though.
>
> > I suggested introducing a new mount option called "llock" that would
> > be semantically the same as "llock" on other operating systems, to do
> > this. It never went anywhere.
> >
> > We now seem to have a fresh opportunity to address this issue with
> > the recent addition of "local_lock". Can we augment this option or
> > add another which allows better control of caching behavior during a
> > file lock?
>
> I wouldn't stand in the way, but it does start to sound like a rather
> confusing array of choices.

I can sort of see the argument too, but on the other hand... does anyone
*really* use locks in this way? If we want a mechanism that allows the
client to force cache invalidation on an inode, it seems like we'd be
better off with an interface for that purpose only (dare I say ioctl? :).

Piggybacking this behavior onto the locking interfaces seems to punish
-o nolock performance for the benefit of some questionable usage
patterns. Mixing this in with -o local_lock also seems confusing, but if
we want to do that, it's probably best to make that call before any
kernels ship with -o local_lock.

Trond, care to weigh in on this?

--
Jeff Layton <jlayton@xxxxxxxxxx>
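[For context, here is a minimal sketch (not from the thread) of the
application-side pattern under discussion: taking and dropping a POSIX
lock purely to force the NFS client to revalidate its cached data
before a read. On a conventional NFS mount, acquiring the lock flushes
and invalidates the client's cached pages for the file, so the read
that follows sees the server's current data. The file path and error
handling are hypothetical.]

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical file on an NFS mount. */
		int fd = open("/mnt/nfs/shared.dat", O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		struct flock fl = {
			.l_type   = F_RDLCK,	/* a shared lock suffices */
			.l_whence = SEEK_SET,
			.l_start  = 0,
			.l_len    = 0,		/* whole file */
		};

		/* On a conventional NFS mount, taking the lock causes the
		 * client to invalidate its cached data for this file, so the
		 * read below goes back to the server instead of possibly
		 * returning stale cached pages. */
		if (fcntl(fd, F_SETLKW, &fl) < 0) {
			perror("fcntl(F_SETLKW)");
			close(fd);
			return 1;
		}

		char buf[4096];
		ssize_t n = read(fd, buf, sizeof(buf));
		printf("read %zd bytes of revalidated data\n", n);

		fl.l_type = F_UNLCK;
		fcntl(fd, F_SETLK, &fl);
		close(fd);
		return 0;
	}

[The question in the thread is whether this cache invalidation should
remain tied to lock calls when "-o nolock" / "-o local_lock" makes the
locks themselves purely local, or whether a dedicated interface should
provide it instead.]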