Re: [PATCH 09/15] fold the call of retain_dentry() into fast_dput()

On Wed, Nov 01, 2023 at 06:20:58AM +0000, Al Viro wrote:
> Calls of retain_dentry() happen immediately after getting false
> from fast_dput(), and getting true from retain_dentry() is
> treated the same way as a non-zero refcount would be treated by
> fast_dput() - unlock the dentry and bugger off.
> 
> Doing that in fast_dput() itself is simpler.
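
For context, the call site being folded looks roughly like this
(paraphrased from the dput() slow path; the exact shape in fs/dcache.c
may differ a bit):

        rcu_read_lock();
        if (likely(fast_dput(dentry))) {
                rcu_read_unlock();
                return;
        }
        // slow case: fast_dput() returned false with ->d_lock held
        rcu_read_unlock();
        if (likely(retain_dentry(dentry))) {
                spin_unlock(&dentry->d_lock);
                return;
        }
        dentry = dentry_kill(dentry);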

FWIW, I wonder if it would be better to reorganize it a bit -

// in some cases we can show that retain_dentry() would return true
// without having to take ->d_lock
< your comments regarding that part go here>
static inline bool lockless_retain_dentry(struct dentry *dentry)
{
        unsigned int d_flags;

        smp_rmb();
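        /*
         * Not holding ->d_lock, so these flags are not stable; it is
         * enough that they held these values at some point after the
         * refcount had already been dropped.
         */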
        d_flags = READ_ONCE(dentry->d_flags);
        d_flags &= DCACHE_REFERENCED | DCACHE_LRU_LIST | DCACHE_OP_DELETE |
                        DCACHE_DISCONNECTED | DCACHE_DONTCACHE;

        /* Nothing to do? Dropping the reference was all we needed? */
        if (d_flags == (DCACHE_REFERENCED | DCACHE_LRU_LIST) && !d_unhashed(dentry))
                return true;
        return false;
}

and fast_dput() becomes

static inline bool fast_dput(struct dentry *dentry)
{
        int ret;

        // try to do decrement locklessly
        ret = lockref_put_return(&dentry->d_lockref);
        if (likely(ret >= 0)) {
                // could we show that full check would succeed?
                if (ret || lockless_retain_dentry(dentry))
                        return true;
                // too bad, have to lock it and do full variant
                spin_lock(&dentry->d_lock);
        } else {
                // failed, no chance to avoid grabbing lock
                spin_lock(&dentry->d_lock);
                // underflow?  whine and back off - we are done
                if (WARN_ON_ONCE(dentry->d_lockref.count <= 0)) {
                        spin_unlock(&dentry->d_lock);
                        return true;
                }
                // decrement it under lock, then...
                dentry->d_lockref.count--;
        }
        // full check it is...
        if (dentry->d_lockref.count || retain_dentry(dentry)) {
                spin_unlock(&dentry->d_lock);
                return true;
        }
        return false;
}
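
For completeness, with the fold in place the caller's side needs no
second check at all; the dput() loop would reduce to something like
this (a sketch only, not part of the patch):

        rcu_read_lock();
        if (likely(fast_dput(dentry))) {
                rcu_read_unlock();
                return;
        }
        // false => ->d_lock is held and the refcount is already zero
        rcu_read_unlock();
        dentry = dentry_kill(dentry);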

Might be easier to follow that way...  Another thing: would you mind

#if USE_CMPXCHG_LOCKREF
extern int lockref_put_return(struct lockref *);
#else
static inline int lockref_put_return(struct lockref *l)
{
	return -1;
}
#endif

in include/linux/lockref.h?  Would be useful on DEBUG_SPINLOCK configs...
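
To illustrate the point with a made-up caller (not from the tree): once
the stub is visible as a static inline, configs where USE_CMPXCHG_LOCKREF
is 0 (such as the DEBUG_SPINLOCK ones mentioned above) let the compiler
throw away the lockless branch and keep only the locked path:

static bool example_put(struct lockref *ref)    // hypothetical caller
{
        int ret = lockref_put_return(ref);      // constant -1 on such configs

        if (ret >= 0)                           // dead code there
                return ret != 0;

        spin_lock(&ref->lock);
        ret = --ref->count;
        spin_unlock(&ref->lock);
        return ret != 0;
}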



