On Thu 24-03-22 21:17:09, Amir Goldstein wrote:
> On Wed, Mar 23, 2022 at 5:46 PM Amir Goldstein <amir73il@xxxxxxxxx> wrote:
> >
> > On Wed, Mar 23, 2022 at 4:28 PM Jan Kara <jack@xxxxxxx> wrote:
> > >
> > > On Wed 23-03-22 16:00:30, Amir Goldstein wrote:
> > > > > Well, but reclaim from kswapd is always the main and preferred source of
> > > > > memory reclaim. And we will kick kswapd to do work if we are running out of
> > > > > memory. Doing direct filesystem slab reclaim from mark allocation is useful
> > > > > only to throttle possibly aggressive mark allocations to the speed of
> > > > > reclaim (instead of getting ENOMEM). So I'm still not convinced this is a
> > > > > big issue but I certainly won't stop you from implementing more fine
> > > > > grained GFP mode selection and lockdep annotations if you want to go that
> > > > > way :).
> > > >
> > > > Well, it was just two lines of code to annotate the fanotify mutex as its own
> > > > class, so I just did that:
> > > >
> > > > https://github.com/amir73il/linux/commit/7b4b6e2c0bd1942cd130e9202c4b187a8fb468c6
> > >
> > > But this implicitly assumes there isn't any allocation under mark_mutex
> > > anywhere else where it is held. Which is likely true (I didn't check) but
> > > it is kind of fragile. So I was rather imagining we would have a per-group
> > > "NOFS" flag and fsnotify_group_lock/unlock() would call
> > > memalloc_nofs_save() based on the flag. And we would use
> > > fsnotify_group_lock/unlock() uniformly across the whole fsnotify codebase.
> > >
> > I see what you mean, but looking at the code it seems quite a bit of churn to go
> > over all the old backends and convert the locks to use wrappers where we "know"
> > those backends are fs reclaim safe (because we did not get reports of deadlocks
> > over the decades they existed).
> >
> > I think I will sleep better with a conversion to three flavors:
> >
> > 1. pflags = fsnotify_group_nofs_lock(fanotify_group);
> > 2. fsnotify_group_lock(dnotify_group) =>
> >        WARN_ON_ONCE(group->flags & FSNOTIFY_NOFS)
> >        mutex_lock(&group->mark_mutex)
> > 3. fsnotify_group_lock_nested(group) =>
> >        mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING)
> >
> I think I might have misunderstood you and you meant that the
> SINGLE_DEPTH_NESTING subcalls should be eliminated and then
> we are left with two lock classes.
> Correct?

Yeah, at least at this point I don't see a good reason for using the
SINGLE_DEPTH_NESTING lockdep annotation. In my opinion it only has the
potential of silencing reports of real locking problems. So removing it and
seeing what complains would be IMO the way to go.

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR