On Thu, May 30, 2019 at 02:39:58PM -0400, J. Bruce Fields wrote:
> > > By the way, the above stack trace with "nfs_lock_and_join_requests"
> > > usually means that you are using a very small rsize or wsize (less than
> > > 4k). Is that the case? If so, you might want to look into just
> > > increasing the I/O size.
> >
> > These exports have rsize and wsize set to 1048576.
>
> Are you getting that from the mount commandline? It could be negotiated
> down during mount. I think you can get the negotiated values from the
> rsize= and wsize= values on the opts: line in /proc/self/mountstats.
> See also /proc/fs/nfsd/max_block_size.

Great catch. I was reporting the configuration from the mount
command-line. I've spot-checked /proc/self/mountstats and it reports
the same values, rsize and wsize of 1048576.

I do have different values here for NFS servers that are
administratively outside of this cluster, where it is 65536, but in
those cases we're not setting that option on the mount command-line,
and I am not experiencing the hang I report here with those servers.

-A
--
Alan Post | Xen VPS hosting for the technically adept
PO Box 61688 | Sunnyvale, CA 94088-1681 | https://prgmr.com/
email: adp@xxxxxxxxx
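
For anyone who wants to spot-check this across a number of mounts, here
is a rough sketch of pulling the negotiated values out of
/proc/self/mountstats (Python; it assumes the usual Linux client layout
where each NFS device stanza carries an "opts:" line, and the helper
name negotiated_sizes is made up for illustration):

#!/usr/bin/env python3
# Rough sketch: print the negotiated rsize/wsize for every NFS mount by
# reading the "opts:" line of each device stanza in /proc/self/mountstats.
import re

# Match "device <spec> mounted on <mountpoint> with fstype nfs" or "nfs4"
# (but not, e.g., "nfsd").
DEVICE_RE = re.compile(r"device .+ mounted on (\S+) with fstype nfs4?(\s|$)")

def negotiated_sizes(path="/proc/self/mountstats"):
    """Yield (mountpoint, rsize, wsize) for each NFS mount."""
    mountpoint = None
    with open(path) as stats:
        for line in stats:
            if line.startswith("device "):
                # Start of a new stanza; remember it only if it's NFS.
                m = DEVICE_RE.match(line)
                mountpoint = m.group(1) if m else None
            elif mountpoint and line.lstrip().startswith("opts:"):
                # e.g. "opts: rw,vers=4.1,rsize=1048576,wsize=1048576,..."
                opts = line.split(":", 1)[1].strip().split(",")
                kv = dict(o.split("=", 1) for o in opts if "=" in o)
                yield mountpoint, kv.get("rsize"), kv.get("wsize")
                mountpoint = None

if __name__ == "__main__":
    for mnt, rsize, wsize in negotiated_sizes():
        print(f"{mnt}: rsize={rsize} wsize={wsize}")

On the clients here this should print rsize=1048576 wsize=1048576 for
the in-cluster mounts and 65536 for the out-of-cluster servers.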