On Tue, May 21, 2019 at 03:46:03PM +0000, Trond Myklebust wrote:
> > A representative sample of stack traces from hung user-submitted
> > processes (jobs). The first here is quite a lot more common than
> > the later two:
> >
> > $ sudo cat /proc/197520/stack
> > [<0>] io_schedule+0x12/0x40
> > [<0>] nfs_lock_and_join_requests+0x309/0x4c0 [nfs]
> > [<0>] nfs_updatepage+0x2a2/0x8b0 [nfs]
> > [<0>] nfs_write_end+0x63/0x4c0 [nfs]
> > [<0>] generic_perform_write+0x138/0x1b0
> > [<0>] nfs_file_write+0xdc/0x200 [nfs]
> > [<0>] new_sync_write+0xfb/0x160
> > [<0>] vfs_write+0xa5/0x1a0
> > [<0>] ksys_write+0x4f/0xb0
> > [<0>] do_syscall_64+0x53/0x100
> > [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> > [<0>] 0xffffffffffffffff
> >
>
> Have you tried upgrading to 4.19.44? There is a fix that went in not
> too long ago that deals with a request leak that can cause stack traces
> like the above that wait forever.
>

That I haven't tried. I gather you're talking about either or both of:

63b0ee126f7e
be74fddc976e

Which I do see went in after 4.19.24 (which I've tried) but didn't
get in 4.20.9 (which I've also tried). Let me see about trying the
4.19.44 kernel.

> By the way, the above stack trace with "nfs_lock_and_join_requests"
> usually means that you are using a very small rsize or wsize (less than
> 4k). Is that the case? If so, you might want to look into just
> increasing the I/O size.
>

These exports have rsize and wsize set to 1048576. That decision was
before my time, and I'll guess this value was picked to match
NFSSVC_MAXBLKSIZE.

Thank you for your help,

-A
-- 
Alan Post | Xen VPS hosting for the technically adept
PO Box 61688 | Sunnyvale, CA 94088-1681 | https://prgmr.com/
email: adp@xxxxxxxxx
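
[Editor's note: for readers unfamiliar with the rsize/wsize options
discussed above, a minimal sketch of how a 1 MiB I/O size would be
requested on the NFS client side. The server name, export path, and
mount point below are hypothetical and not taken from this thread;
only the rsize/wsize values match what is described above.]

    # hypothetical /etc/fstab entry on an NFS client; the server may
    # clamp the requested rsize/wsize down to its own maximum
    nfsserver:/export  /mnt/export  nfs  rsize=1048576,wsize=1048576  0 0

The values actually negotiated for an existing mount can be checked on
the client with "nfsstat -m" or by reading /proc/mounts.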