On Thu, Nov 09, 2023 at 06:20:44AM +0000, Al Viro wrote:
> Currently the "need caller to do more work" path in fast_dput()
> has the refcount decremented; then, with ->d_lock held and the
> refcount verified to have reached 0, fast_dput() forcibly resets
> the refcount to 1.
>
> Move that resetting of the refcount to 1 into the callers; later in
> the series it will be massaged out of existence.
>
> Signed-off-by: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
> ---

Ok, this is safe to do because of

[PATCH 09/22] fast_dput(): handle underflows gracefully
https://lore.kernel.org/linux-fsdevel/20231109062056.3181775-9-viro@xxxxxxxxxxxxxxxxxx

as a false return from fast_dput() now always means the refcount is zero.

Reviewed-by: Christian Brauner <brauner@xxxxxxxxxx>

>  fs/dcache.c | 9 ++-------
>  1 file changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/fs/dcache.c b/fs/dcache.c
> index e02b3c81bc02..9a3eeee02500 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -847,13 +847,6 @@ static inline bool fast_dput(struct dentry *dentry)
>  		spin_unlock(&dentry->d_lock);
>  		return true;
>  	}
> -
> -	/*
> -	 * Re-get the reference we optimistically dropped. We hold the
> -	 * lock, and we just tested that it was zero, so we can just
> -	 * set it to 1.
> -	 */
> -	dentry->d_lockref.count = 1;
>  	return false;
>  }
>
> @@ -896,6 +889,7 @@ void dput(struct dentry *dentry)
>  	}
>
>  	/* Slow case: now with the dentry lock held */
> +	dentry->d_lockref.count = 1;
>  	rcu_read_unlock();
>
>  	if (likely(retain_dentry(dentry))) {
> @@ -930,6 +924,7 @@ void dput_to_list(struct dentry *dentry, struct list_head *list)
>  		return;
>  	}
>  	rcu_read_unlock();
> +	dentry->d_lockref.count = 1;
>  	if (!retain_dentry(dentry))
>  		__dput_to_list(dentry, list);
>  	spin_unlock(&dentry->d_lock);
> --
> 2.39.2
>
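
Side note for anyone reading along: with patch 09/22 applied first, the
control flow of dput() after this change looks roughly like the sketch
below. This is reconstructed from the quoted hunks, not the verbatim
fs/dcache.c source; the loop shape and the dentry_kill() tail beyond
what the hunks show are approximate and only meant to illustrate where
the refcount reset now lives.

	void dput(struct dentry *dentry)
	{
		while (dentry) {
			rcu_read_lock();
			if (likely(fast_dput(dentry))) {
				/* Fast path: refcount still positive, no lock taken. */
				rcu_read_unlock();
				return;
			}

			/*
			 * Slow case: now with the dentry lock held.  After
			 * patch 09/22, a false return from fast_dput()
			 * guarantees the refcount is exactly zero, so
			 * re-grabbing the reference is a plain store -- this
			 * is the line the patch moves out of fast_dput()
			 * into the caller.
			 */
			dentry->d_lockref.count = 1;
			rcu_read_unlock();

			if (likely(retain_dentry(dentry))) {
				spin_unlock(&dentry->d_lock);
				return;
			}

			dentry = dentry_kill(dentry);
		}
	}

Having the store in the callers rather than in fast_dput() puts it right
next to the retain_dentry() calls in both dput() and dput_to_list(),
which is presumably what lets later patches in the series massage the
reset out of existence, as the commit message says.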