Re: CPU lockup in or near new filecache code

On Wed, 2019-12-11 at 15:01 -0500, Chuck Lever wrote:
> OK, I finally got a hit. It took a long time. I've seen this
> particular
> stack trace before, several times.
> 
> Dec 11 14:58:34 klimt kernel: watchdog: BUG: soft lockup - CPU#0
> stuck for 22s! [nfsd:2005]
> Dec 11 14:58:34 klimt kernel: Modules linked in: rpcsec_gss_krb5
> ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager
> ocfs2_stackglue ib_umad ib_ipoib mlx4_ib sb_edac x86_pkg_temp_thermal
> kvm_intel coretemp kvm irqbypass crct10dif_pclmul crc32_pclmul
> ghash_clmulni_intel iTCO_wdt ext4 iTCO_vendor_support aesni_intel
> mbcache jbd2 glue_helper rpcrdma crypto_simd cryptd rdma_ucm ib_iser
> rdma_cm pcspkr iw_cm ib_cm mei_me raid0 libiscsi lpc_ich mei sg
> scsi_transport_iscsi i2c_i801 mfd_core wmi ipmi_si ipmi_devintf
> ipmi_msghandler ioatdma acpi_power_meter nfsd nfs_acl lockd
> auth_rpcgss grace sunrpc ip_tables xfs libcrc32c mlx4_en sr_mod
> sd_mod cdrom qedr ast drm_vram_helper drm_ttm_helper ttm crc32c_intel
> drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm igb
> dca i2c_algo_bit i2c_core mlx4_core ahci libahci libata nvme
> nvme_core qede qed dm_mirror dm_region_hash dm_log dm_mod crc8
> ib_uverbs dax ib_core
> Dec 11 14:58:34 klimt kernel: CPU: 0 PID: 2005 Comm: nfsd Tainted:
> G        W         5.5.0-rc1-00003-g170e7adc2317 #1401
> Dec 11 14:58:34 klimt kernel: Hardware name: Supermicro Super
> Server/X10SRL-F, BIOS 1.0c 09/09/2015
> Dec 11 14:58:34 klimt kernel: RIP: 0010:__srcu_read_lock+0x23/0x24
> Dec 11 14:58:34 klimt kernel: Code: 07 00 0f 1f 40 00 c3 0f 1f 44 00
> 00 8b 87 c8 c3 00 00 48 8b 97 f0 c3 00 00 83 e0 01 48 63 c8 65 48 ff
> 04 ca f0 83 44 24 fc 00 <c3> 0f 1f 44 00 00 f0 83 44 24 fc 00 48 63
> f6 48 8b 87 f0 c3 00 00
> Dec 11 14:58:34 klimt kernel: RSP: 0018:ffffc90001d97bd0 EFLAGS:
> 00000246 ORIG_RAX: ffffffffffffff13
> Dec 11 14:58:34 klimt kernel: RAX: 0000000000000001 RBX:
> ffff888830d0eb78 RCX: 0000000000000001
> Dec 11 14:58:34 klimt kernel: RDX: 0000000000030f00 RSI:
> ffff888853f4da00 RDI: ffffffff82815a40
> Dec 11 14:58:34 klimt kernel: RBP: ffff88883112d828 R08:
> ffff888843540000 R09: ffffffff8121d707
> Dec 11 14:58:34 klimt kernel: R10: ffffc90001d97bf0 R11:
> 0000000000001b84 R12: ffff888853f4da00
> Dec 11 14:58:34 klimt kernel: R13: ffff8888132a1410 R14:
> ffff88883112d7e0 R15: 00000000ffffffef
> Dec 11 14:58:34 klimt kernel: FS:  0000000000000000(0000)
> GS:ffff88885fc00000(0000) knlGS:0000000000000000
> Dec 11 14:58:34 klimt kernel: CS:  0010 DS: 0000 ES: 0000 CR0:
> 0000000080050033
> Dec 11 14:58:34 klimt kernel: CR2: 00007f2d6a2d8000 CR3:
> 0000000859b38004 CR4: 00000000001606f0
> Dec 11 14:58:34 klimt kernel: Call Trace:
> Dec 11 14:58:34 klimt kernel: fsnotify_grab_connector+0x16/0x4f
> Dec 11 14:58:34 klimt kernel: fsnotify_find_mark+0x11/0x6a
> Dec 11 14:58:34 klimt kernel: nfsd_file_acquire+0x3a9/0x5b2 [nfsd]
> Dec 11 14:58:34 klimt kernel: nfs4_get_vfs_file+0x14c/0x20f [nfsd]
> Dec 11 14:58:34 klimt kernel: nfsd4_process_open2+0xcd6/0xd98 [nfsd]
> Dec 11 14:58:34 klimt kernel: ? fh_verify+0x42e/0x4ef [nfsd]
> Dec 11 14:58:34 klimt kernel: ? nfsd4_process_open1+0x233/0x29d
> [nfsd]
> Dec 11 14:58:34 klimt kernel: nfsd4_open+0x500/0x5cb [nfsd]
> Dec 11 14:58:34 klimt kernel: nfsd4_proc_compound+0x32a/0x5c7 [nfsd]
> Dec 11 14:58:34 klimt kernel: nfsd_dispatch+0x102/0x1e2 [nfsd]
> Dec 11 14:58:34 klimt kernel: svc_process_common+0x3b3/0x65d [sunrpc]
> Dec 11 14:58:34 klimt kernel: ? svc_xprt_put+0x12/0x21 [sunrpc]
> Dec 11 14:58:34 klimt kernel: ? nfsd_svc+0x2be/0x2be [nfsd]
> Dec 11 14:58:34 klimt kernel: ? nfsd_destroy+0x51/0x51 [nfsd]
> Dec 11 14:58:34 klimt kernel: svc_process+0xf6/0x115 [sunrpc]
> Dec 11 14:58:34 klimt kernel: nfsd+0xf2/0x149 [nfsd]
> Dec 11 14:58:34 klimt kernel: kthread+0xf6/0xfb
> Dec 11 14:58:34 klimt kernel: ? kthread_queue_delayed_work+0x74/0x74
> Dec 11 14:58:34 klimt kernel: ret_from_fork+0x3a/0x50
> 

Does something like the following help?

8<---------------------------------------------------
From caf515c82ed572e4f92ac8293e5da4818da0c6ce Mon Sep 17 00:00:00 2001
From: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
Date: Fri, 13 Dec 2019 15:07:33 -0500
Subject: [PATCH] nfsd: Fix a soft lockup race in
 nfsd_file_mark_find_or_create()

If nfsd_file_mark_find_or_create() keeps winning the race for the
nfsd_file_fsnotify_group->mark_mutex against nfsd_file_mark_put(),
then it can soft lock up, since fsnotify_add_inode_mark() always
ends up finding an existing entry.

Signed-off-by: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
---
 fs/nfsd/filecache.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index 9c2b29e07975..f275c11c4e28 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -132,9 +132,13 @@ nfsd_file_mark_find_or_create(struct nfsd_file *nf)
 						 struct nfsd_file_mark,
 						 nfm_mark));
 			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);
-			fsnotify_put_mark(mark);
-			if (likely(nfm))
+			if (nfm) {
+				fsnotify_put_mark(mark);
 				break;
+			}
+			/* Avoid soft lockup race with nfsd_file_mark_put() */
+			fsnotify_destroy_mark(mark, nfsd_file_fsnotify_group);
+			fsnotify_put_mark(mark);
 		} else
 			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);
 
-- 
2.23.0
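
For anyone following along: the loop that spins is the one visible in
the diff context above. Here is a simplified sketch of the livelock,
pieced together from that context and the surrounding 5.5-era
filecache.c; it is not verbatim kernel source, and the elided
allocation path is paraphrased:

	/* nfsd thread A: nfsd_file_mark_find_or_create(), simplified */
	do {
		mutex_lock(&nfsd_file_fsnotify_group->mark_mutex);
		mark = fsnotify_find_mark(&inode->i_fsnotify_marks,
					  nfsd_file_fsnotify_group);
		if (mark) {
			/* refcount_inc_not_zero(); fails once the nfsd
			 * refcount on the mark has already hit zero */
			nfm = nfsd_file_mark_get(container_of(mark,
						 struct nfsd_file_mark,
						 nfm_mark));
			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);
			fsnotify_put_mark(mark);
			if (likely(nfm))
				break;
		} else
			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);

		/* ... allocate a new mark; fsnotify_add_inode_mark()
		 * then returns -EEXIST because the dying mark is still
		 * attached to the inode, so we go around again ... */
	} while (unlikely(err == -EEXIST));

	/* nfsd thread B: nfsd_file_mark_put(), simplified */
	if (refcount_dec_and_test(&nfm->nfm_ref)) {
		/* fsnotify_destroy_mark() needs mark_mutex, which
		 * thread A keeps re-acquiring first */
		fsnotify_destroy_mark(&nfm->nfm_mark,
				      nfsd_file_fsnotify_group);
		fsnotify_put_mark(&nfm->nfm_mark);
	}

As long as thread A keeps winning mark_mutex, thread B can never detach
the dying mark, so thread A keeps finding it, nfsd_file_mark_get()
keeps failing, and the loop never sleeps: hence the soft lockup in the
trace. The patch breaks the cycle by having thread A call
fsnotify_destroy_mark() itself when the refcount bump fails, so forward
progress no longer depends on thread B winning the mutex.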


-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx