[PATCH v2 0/4] Fix softlockup when adding inotify watch

Hi Al et al,

When a system with large amounts of memory has several million
negative dentries in a single directory, a softlockup can occur while
adding an inotify watch:

 watchdog: BUG: soft lockup - CPU#20 stuck for 9s! [inotifywait:9528]
 CPU: 20 PID: 9528 Comm: inotifywait Kdump: loaded Not tainted 5.16.0-rc4.20211208.el8uek.rc1.x86_64 #1
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.4.1 12/03/2020
 RIP: 0010:__fsnotify_update_child_dentry_flags+0xad/0x120
 Call Trace:
  <TASK>
  fsnotify_add_mark_locked+0x113/0x160
  inotify_new_watch+0x130/0x190
  inotify_update_watch+0x11a/0x140
  __x64_sys_inotify_add_watch+0xef/0x140
  do_syscall_64+0x3b/0x90
  entry_SYSCALL_64_after_hwframe+0x44/0xae

This patch series is a modified version of the following:
https://lore.kernel.org/linux-fsdevel/1611235185-1685-1-git-send-email-gautham.ananthakrishna@xxxxxxxxxx/

The strategy employed by this series is to move negative dentries to the
end of the d_subdirs list and mark them with a "tail negative" flag.
Readers of the d_subdirs list, which are only interested in positive
dentries, can then stop walking once they reach the first tail-negative
dentry. With this series applied, the above softlockup caused by 200
million negative dentries on my test system no longer occurs, and
inotify watches are set up nearly instantly.
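
To illustrate the reader-side idea, here is a minimal user-space sketch.
The list, the flag, and the names (child, positive, tail_negative,
visit_positive) are illustrative stand-ins for d_subdirs, the new dentry
flag, and the list walkers touched by patches 2-4; they are not the
kernel API:

#include <stdbool.h>
#include <stdio.h>

struct child {
        struct child *next;
        bool positive;          /* stand-in for "dentry has an inode" */
        bool tail_negative;     /* stand-in for the new flag */
};

/*
 * A reader that only cares about positive entries can stop at the
 * first tail-negative entry: the sweep guarantees that everything
 * after it is negative too.
 */
static void visit_positive(struct child *head)
{
        for (struct child *c = head; c; c = c->next) {
                if (c->tail_negative)
                        break;  /* the rest of the list is negative */
                if (c->positive)
                        printf("visiting positive child %p\n", (void *)c);
        }
}

int main(void)
{
        struct child neg2 = { NULL,  false, true  };    /* swept negative */
        struct child neg1 = { &neg2, false, true  };    /* swept negative */
        struct child pos  = { &neg1, true,  false };    /* positive */

        visit_positive(&pos);   /* visits pos, then stops at neg1 */
        return 0;
}

With millions of negative dentries swept to the tail, the walk touches
only the positive entries instead of the whole list, which is what
eliminates the softlockup.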

Previously, Al raised concerns about:

1. Possible memory corruption due to the use of lock_parent() in
sweep_negative(); see patch 01 for the fix.
2. The previous patch didn't catch all the ways a negative dentry could
become positive (d_add, d_instantiate_new); see patch 01.
3. The previous series contained a new negative dentry limit, which
capped the negative dentry count at around 3 per hash bucket. I've
dropped this patch from the series.

Patches 2-4 are unmodified from the previous posting.

In v1 of this series, the 0day bot reported a warning triggered by patch 1:
https://lore.kernel.org/linux-fsdevel/20211218081736.GA1071@xsang-OptiPlex-9020/

I reproduced this warning and verified that it no longer occurs with my
patch on 5.17 rc kernels. In particular, commit 29044dae2e74 ("fsnotify:
fix fsnotify hooks in pseudo filesystems") resolves the warning, which I
verified on the 5.16 branch that the 0day bot tested. It seems that
nfsdfs was using d_delete(), rather than d_drop(), to remove some
pseudo-filesystem dentries, while expecting never to encounter negative
dentries. I don't believe the warning reflected an error in this patch
series.
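
For reference, here is the difference between the two calls in this
context, with illustrative call sites rather than the actual nfsdfs
code:

        /*
         * d_delete(): if nothing else holds the dentry, it becomes a
         * hashed *negative* dentry that later lookups can hit.
         */
        d_delete(dentry);

        /*
         * d_drop(): the dentry is unhashed, so no cached negative
         * entry is left behind for lookups to find.
         */
        d_drop(dentry);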

v2:
- explain the nfsd warning
- remove sweep_negative() call from __d_add - rely on dput() for that


Konstantin Khlebnikov (2):
  dcache: add action D_WALK_SKIP_SIBLINGS to d_walk()
  dcache: stop walking siblings if remaining dentries all negative

Stephen Brennan (2):
  dcache: sweep cached negative dentries to the end of list of siblings
  fsnotify: stop walking child dentries if remaining tail is negative

 fs/dcache.c            | 101 +++++++++++++++++++++++++++++++++++++++--
 fs/libfs.c             |   3 ++
 fs/notify/fsnotify.c   |   6 ++-
 include/linux/dcache.h |   6 +++
 4 files changed, 110 insertions(+), 6 deletions(-)

-- 
2.30.2
