On Wed, 2018-02-14 at 09:01 -0500, Scott Mayhew wrote:
> Hi Trond,
>
> Commit ca0daa2 ("NFS: Cache aggressively when file is open for
> writing") removed the inode revalidation from do_setlk(). Why was
> that necessary? If just that part of the commit is added back in, the
> client still seems to be able to cope with out-of-order write
> replies.

It can cope with out-of-order replies, but not with changes made by a
second client, which is highly likely when you are using file locking.
In that case, we still need to invalidate the data cache.

> Currently the client invalidates the data cache whenever it takes a
> lock, and that causes performance problems for some workloads...

Exactly which workloads?

> ...if a client wants to re-read portions of a file, and no other
> client has modified that file, then why should the reads go out on
> the wire just because locking is being used?

The point of the patch is to no longer track whether or not another
client has changed the file while the file is open. That tracking was
producing far too many false positives for no gain, and was causing
heavy slowdowns for performance-optimised workloads due to spurious
cache invalidations. Workloads that use locking are generally _not_
considered to be optimised for performance.

IOW: the patch restores the historically preferred behaviour of the
locking code, as described in the NFS FAQ. See
http://nfs.sourceforge.net/#faq_a8

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@xxxxxxxxxxxxxxx
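
[For readers following along: the lock-based cache-consistency pattern the FAQ describes looks roughly like this from an application's point of view. This is a minimal sketch, not the kernel code under discussion; it uses POSIX advisory locks via Python's fcntl module on a scratch file standing in for a file on an NFS mount.]

```python
import fcntl
import os
import tempfile

# Scratch file standing in for a file on an NFS mount (hypothetical).
fd, path = tempfile.mkstemp()
os.write(fd, b"shared data")

# Taking the lock is the synchronization point: per the FAQ behaviour,
# the NFS client revalidates its cached data here, so the reads that
# follow observe writes other clients made before the lock was granted.
fcntl.lockf(fd, fcntl.LOCK_EX)
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 1024)

# Releasing the lock flushes any dirty pages back to the server,
# making them visible to the next client that takes the lock.
fcntl.lockf(fd, fcntl.LOCK_UN)
os.close(fd)
os.remove(path)
print(data)
```

Applications that follow this lock/read ... write/unlock discipline get cache coherence between clients without needing the client to guess, outside of locking, whether another client has touched the file.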