Re: [PATCH v3 0/3] fsnotify: fix softlockups iterating over d_subdirs

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Fri, Nov 11, 2022 at 3:12 AM Stephen Brennan
<stephen.s.brennan@xxxxxxxxxx> wrote:
>
> Amir Goldstein <amir73il@xxxxxxxxx> writes:
>
> > On Thu, Nov 10, 2022, 10:25 PM Stephen Brennan <stephen.s.brennan@xxxxxxxxxx>
> > wrote:
> >
> >> Amir Goldstein <amir73il@xxxxxxxxx> writes:
> >> [...]
> >> > IIUC, patches #1+#3 fix a reproducible softlock, so if Al approves them,
> >> > I see no reason to withhold.
> >> >
> >> > With patches #1+#3 applied, the penalty is restricted to adding or
> >> > removing/destroying multiple marks on very large dirs or dirs with
> >> > many negative dentries.
> >> >
> >> > I think that fixing the synthetic test case of multiple added marks
> >> > is rather easy even without DCACHE_FSNOTIFY_PARENT_WATCHED.
> >> > Something like the attached patch is what Jan had suggested in the
> >> > first place.
> >> >
> >> > The synthetic test case of concurrent add/remove watch on the same
> >> > dir may still result in multiple children iterations, but that will
> >> usually
> >> > not be avoided even with DCACHE_FSNOTIFY_PARENT_WATCHED
> >> > and probably not worth optimizing for.
> >> >
> >> > Thoughts?
> >>
> >> If I understand your train of thought, your below patch would take the
> >> place of my patch #2, since #3 requires we not hold a spinlock during
> >> fsnotify_update_children_dentry_flags().
> >>
> >> And since you fully rely on dentry accessors to lazily clear flags, you
> >> don't need to have the mutual exclusion provided by the inode lock.
> >>
> >
> > You don't need it to synchronize set and clear of all children, but inode
> > lock is still useful for the update of children.
> >
> >
> >> I like that a lot.
> >>
> >> However, the one thing I see here is that if we no longer hold the inode
> >> lock, patch #3 needs to be re-examined: it assumes that dentries can't
> >> be removed or moved around, and that assumption is only true with the
> >> inode lock held.
> >>
> >
> > I wasn't planning on dropping that assumption
> > I was assuming that another patch would replace the spinlock with inode
> > lock.
> > As you had planned to do to avoid softlocks
> > When adding marks.
>
> Ah, thanks for the clarity! I believe I was working to that conclusion
> myself too.
>
> I actually believe I can make patch #3 work even when inode lock is not
> held, using the d_parent+retry check, and ensuring we skip cursors

I believe that would be a textbook premature optimization.
With all the patches planned, what would be the problem solved by
not holding inode lock?

> before cond_resched(). I was hoping to have a tested series ready by the
> end of my workday, but alas, it wasn't in the cards. I've got some
> builds running, which I'll test on my LTP machine in the morning. The
> overall idea is:
>
> fsnotify: clear PARENT_WATCHED flags lazily
>   -> As you sent it, no changes
> fsnotify: Use d_find_any_alias to get dentry associated with inode
>   -> My patch #1, rebased
> dnotify: move fsnotify_recalc_mask() outside spinlock
>   -> New patch to ensure we don't sleep while holding dnotify spnilock
> fsnotify: allow sleepable child flag update
>   -> My patch #3, but working even without inode lock held
> fsnotify: require inode lock held during child flag update
>   -> An optional additional patch to require the inode lock to be held,
>      just like you described.
>      The main benefit of having this (assuming my changes to my patch #3
>      hold out under scrutiny/testing) would be that it prevents the
>      situation where another task can do an fsnotify_recalc_mask(), see
>      that the flags don't need an update, and return before a different
>      task actually finishes the child flag update.

Assuming that you figured out the lock ordering and fixed the
dnotify locking, I just don't see pros in letting go of inode lock
only cons.

At the most, you can defer getting rid of inode lock to a later time
after the dust has settled on the other changes, but again, not
sure for what purpose.

I cannot see how holding inode lock could in any way regress the
situation from the current state of holding the spinlock.

Thanks,
Amir.



[Index of Archives]     [Linux Ext4 Filesystem]     [Union Filesystem]     [Filesystem Testing]     [Ceph Users]     [Ecryptfs]     [NTFS 3]     [AutoFS]     [Kernel Newbies]     [Share Photos]     [Security]     [Netfilter]     [Bugtraq]     [Yosemite News]     [MIPS Linux]     [ARM Linux]     [Linux Security]     [Linux Cachefs]     [Reiser Filesystem]     [Linux RAID]     [NTFS 3]     [Samba]     [Device Mapper]     [CEPH Development]

  Powered by Linux