Re: User process NFS write hang followed by automount hang requiring reboot

On Mon, 2019-05-20 at 16:33 -0600, Alan Post wrote:
> I'm working on a compute cluster with approximately 300 NFS
> client machines running Linux 4.19.28[1].  These machines accept
> user-submitted jobs which access filesystems exported by
> approximately a dozen NFS servers, most running Linux 4.4.0 but
> a couple running 4.19.28.  In all cases we mount with nfsvers=4.
> 
> From time to time one of these user-submitted jobs hangs in
> uninterruptible sleep (D state) while performing a write to one or
> more of these NFS servers, and never completes.  Once this happens,
> calls to sync will themselves hang in uninterruptible sleep.
> Eventually the same thing happens to automount/mount.nfs, and by
> that point the host is completely irrecoverable.
> 
> The problem is more common on our NFS clients when they’re
> communicating with an NFS server running 4.19.28, but is not
> unique to that NFS server kernel version.
> 
> Here is a representative sample of stack traces from hung
> user-submitted processes (jobs).  The first is quite a lot more
> common than the latter two:
> 
>     $ sudo cat /proc/197520/stack
>     [<0>] io_schedule+0x12/0x40
>     [<0>] nfs_lock_and_join_requests+0x309/0x4c0 [nfs]
>     [<0>] nfs_updatepage+0x2a2/0x8b0 [nfs]
>     [<0>] nfs_write_end+0x63/0x4c0 [nfs]
>     [<0>] generic_perform_write+0x138/0x1b0
>     [<0>] nfs_file_write+0xdc/0x200 [nfs]
>     [<0>] new_sync_write+0xfb/0x160
>     [<0>] vfs_write+0xa5/0x1a0
>     [<0>] ksys_write+0x4f/0xb0
>     [<0>] do_syscall_64+0x53/0x100
>     [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
>     [<0>] 0xffffffffffffffff
> 
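(For reference, one way to collect this kind of trace for every task stuck
in D state on an affected client is a loop over /proc; a minimal sketch,
assuming root access and standard procps tools:

    $ ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}' | \
          while read pid; do echo "== PID $pid =="; sudo cat /proc/$pid/stack; done
)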

Have you tried upgrading to 4.19.44? A fix went in not too long ago
for a request leak that can cause hangs like the one above, with the
process waiting forever.

By the way, the above stack trace with "nfs_lock_and_join_requests"
usually means that you are using a very small rsize or wsize (less than
4k). Is that the case? If so, you might want to look into just
increasing the I/O size.
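
For reference, the rsize/wsize actually in effect can be checked on the
client and raised via the mount options; a minimal sketch, where the
server name, export path and 1 MiB sizes are only placeholders:

    # show the options (including rsize/wsize) negotiated for current NFS mounts
    $ nfsstat -m

    # example /etc/fstab line raising the I/O size
    server:/export  /mnt/export  nfs4  rsize=1048576,wsize=1048576  0  0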

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx





