Re: [PATCH] fs: don't scan the inode cache before SB_ACTIVE is set

On Mon, Mar 26, 2018 at 05:33:32PM +1100, Dave Chinner wrote:
> > It's potentially racy, though - don't we need a barrier between setting the
> > things up and setting SB_ACTIVE?
> 
> Well, we start with it clear, so it won't be a problem if the
> shrinker races with it being set. I think it's more a problem when
> we clear it, but I'm not sure how much of a problem that is because
> the filesystem structures are still all set up whenever it gets
> cleared.

... except that stores might be reordered, with the ->s_flags store observed
before some of the stores that preceded it.
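
Something like this would close it (a minimal sketch; setup_fs_structures()
and the shrinker-side fragment are illustrative, not the actual code):

	/* mount side: publish only after the setup stores are visible */
	setup_fs_structures(sb);	/* illustrative: the stores that precede the flag */
	smp_wmb();			/* order setup stores before the flag store */
	sb->s_flags |= SB_BORN;

	/* shrinker side: test the flag, then pair with the barrier above */
	if (!(sb->s_flags & SB_BORN))
		return 0;		/* nothing safely visible yet */
	smp_rmb();			/* pairs with smp_wmb() on the mount side */
	/* ... structures set up before SB_BORN are now safe to inspect ... */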

> That said, it's no trouble to add smp_wmb/smp_rmb barriers where
> necessary...
> 
> > And that, BTW, means that we want SB_BORN instead of SB_ACTIVE - unlike the
> > latter, the former is set only in one place.
> 
> Not sure that's the case - lots of filesystems set SB_ACTIVE in
> their mount process to enable iput_final() to cache inodes. That's
> why I chose SB_ACTIVE - it matches when the filesystem starts making
> use of the inode cache and giving the shrinker real work to do....
> 
> <shrug> not fussed - let me know if you still prefer SB_BORN and
> I'll switch it.

I do.  Let it match places like trylock_super() et al.
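
For reference, trylock_super() already gates on SB_BORN; it looks roughly
like this (fs/super.c, comments added):

	static bool trylock_super(struct super_block *sb)
	{
		if (down_read_trylock(&sb->s_umount)) {
			/* skip superblocks not fully born or already being torn down */
			if (!hlist_unhashed(&sb->s_instances) &&
			    sb->s_root && (sb->s_flags & SB_BORN))
				return true;
			up_read(&sb->s_umount);
		}
		return false;
	}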


