Re: [PATCH 0/6] kernfs: proposed locking and concurrency improvement

On Mon, 2021-01-11 at 15:04 +0800, Fox Chen wrote:
> On Mon, Jan 11, 2021 at 12:20 PM Ian Kent <raven@xxxxxxxxxx> wrote:
> > On Mon, 2021-01-11 at 11:19 +0800, Ian Kent wrote:
> > > On Wed, 2021-01-06 at 10:38 +0800, Fox Chen wrote:
> > > > Hi Ian,
> > > > 
> > > > I am rethinking this problem. Can we simply use a global lock?
> > > > 
> > > >  In your original patch 5, you have a global mutex attr_mutex
> > > > to protect attr. If we change it to an rwsem, is that enough
> > > > to protect both the inode and attr while still allowing
> > > > concurrent reads?
> > > > 
> > > > Like this patch I submitted (clearly, I missed the
> > > > __kernfs_iattrs part, but it's just about that idea):
> > > > https://lore.kernel.org/lkml/20201207084333.179132-1-foxhlchen@xxxxxxxxx/
> > > 
> > > I don't think so.
> > > 
> > > kernfs_refresh_inode() writes to the inode, so taking a read
> > > lock would allow multiple processes to update it concurrently,
> > > which is what we need to avoid.
> 
> Oh, got it. I missed the inode part. My bad. :(
> 
> > > It's possibly even more interesting.
> > > 
> > > For example, kernfs_iop_rmdir() and kernfs_iop_mkdir() might
> > > alter the inode link count (I don't know if that's the sort of
> > > thing they would do, but kernfs can't possibly know either).
> > > Both of these functions rely on the VFS locking for exclusion,
> > > but the inode link count is updated in kernfs_refresh_inode()
> > > too.
> > > 
> > > That's the case now, without any patches.
> > 
> > So it's not so easy to get the inode from just the kernfs object,
> > so these probably aren't a problem ...
> 
> IIUC, the result of rmdir/mkdir is only synced with the VFS when
> dop->revalidate or iop->lookup is called.

I don't quite get what you mean here.

Do you mean something like this: VFS objects are created on user
access to the file system, given that user access generally means
path resolution, possibly followed by some operation?

I guess those VFS objects will go away some time after the access,
but even though the code looks like that should happen fairly
quickly, I've observed that these objects stay around longer than
expected. Otherwise there wouldn't be any use in maintaining a least
recently used list of dentry candidates eligible to discard.
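
For context, the lookup path is where those objects get created, and
it looks roughly like this (paraphrased from fs/kernfs/dir.c, with
error handling, locking and namespace details elided):

static struct dentry *kernfs_iop_lookup(struct inode *dir,
					struct dentry *dentry,
					unsigned int flags)
{
	struct kernfs_node *parent = dir->i_private;
	struct kernfs_node *kn;
	struct inode *inode = NULL;

	/* find the backing kernfs_node by name ... */
	kn = kernfs_find_ns(parent, dentry->d_name.name, NULL);

	/* ... and only now create/attach a VFS inode for it */
	if (kn)
		inode = kernfs_get_inode(dir->i_sb, kn);

	/* a NULL inode here gives a negative dentry */
	return d_splice_alias(inode, dentry);
}

So the dentry/inode pair for a kernfs_node only exists once something
has walked the path, and it stays around until the dcache decides to
drop it.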

> 
> kernfs_node is detached from the VFS inode/dentry to save RAM.
> 
> > > I'm not entirely sure what's going on in kernfs_refresh_inode().
> > > 
> > > It could be as simple as being called with a NULL inode because
> > > the dentry concerned is negative at that point. I haven't had
> > > time to look closely at it TBH but I have been thinking about it.
> 
> Um, it shouldn't be called with a NULL inode, right?
> 
> inode->i_mode = kn->mode;
> 
> otherwise this would crash.

Yes, you're right about that.
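
For reference, kernfs_refresh_inode() currently looks roughly like
this (paraphrased from fs/kernfs/inode.c, not line-exact):

static void kernfs_refresh_inode(struct kernfs_node *kn,
				 struct inode *inode)
{
	struct kernfs_iattrs *attrs = kn->iattr;

	/* unconditional dereference, a NULL inode would oops here */
	inode->i_mode = kn->mode;

	/* copy any non-default attributes held in the kernfs_node */
	if (attrs)
		set_inode_attr(inode, attrs);

	/* the inode link count write discussed below */
	if (kernfs_type(kn) == KERNFS_DIR)
		set_nlink(inode, kn->dir.subdirs + 2);
}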

> 
> > Certainly this can be called without a struct iattr having been
> > allocated ... and given that it probably needs to remain a pointer
> > rather than be embedded in the node, the inode link count update
> > can't easily be protected from concurrent updates.
> > 
> > If it were ok to do the allocation at inode creation, the problem
> > would become much simpler to resolve, but I thought there were
> > concerns about ram consumption (although I don't think that was
> > exactly what was said?).
> > 
> 
> You meant iattr should be allocated at inode creation time?
> Yes, I think so. It's due to RAM consumption.

I did, yes.

The actual problem is dealing with multiple concurrent updates to
the inode link count; the rest can work.
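
To make that concrete: with a single global rwsem taken for read on
this path (as in the earlier proposal), two tasks can run the refresh
on the same inode at the same time. The lock name below is purely
illustrative, not from any posted patch:

	Task A					Task B
	down_read(&kernfs_iattr_rwsem);		down_read(&kernfs_iattr_rwsem);
	kernfs_refresh_inode(kn, inode);	kernfs_refresh_inode(kn, inode);
	  -> set_nlink(inode, ...);		  -> set_nlink(inode, ...);

Both link count updates hit the inode unserialised (and can also
interleave with mkdir/rmdir paths that change the count under VFS
locks). Serialising the refresh would need something like:

	/* exclusive, not shared, around the inode update */
	down_write(&kernfs_iattr_rwsem);
	kernfs_refresh_inode(kn, inode);
	up_write(&kernfs_iattr_rwsem);

but that gives up the read side concurrency the rwsem was meant to
buy on exactly the hot path we're trying to speed up.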

Ian
