Re: [RFC PATCH 0/7] vfs: improving inode cache iteration scalability


 



On Thu, Oct 03, 2024 at 09:17:08AM GMT, Dave Chinner wrote:
> On Wed, Oct 02, 2024 at 04:28:35PM -0400, Kent Overstreet wrote:
> > On Wed, Oct 02, 2024 at 12:49:13PM GMT, Linus Torvalds wrote:
> > > On Wed, 2 Oct 2024 at 05:35, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On Wed, Oct 02, 2024 at 12:00:01PM +0200, Christian Brauner wrote:
> > > >
> > > > > I don't have big conceptual issues with the series otherwise. The only
> > > > > thing that makes me a bit uneasy is that we are now providing an api
> > > > > that may encourage filesystems to do their own inode caching even if
> > > > > they don't really have a need for it just because it's there.  So really
> > > > > a way that would've solved this issue generically would have been my
> > > > > preference.
> > > >
> > > > Well, that's the problem, isn't it? :/
> > > >
> > > > There really isn't a good generic solution for global list access
> > > > and management.  The dlist stuff kinda works, but it still has
> > > > significant overhead and doesn't get rid of spinlock contention
> > > > completely because of the lack of locality between list add and
> > > > remove operations.
> > > 
> > > I much prefer the approach taken in your patch series, to let the
> > > filesystem own the inode list and keeping the old model as the
> > > "default list".
> > > 
> > > In many ways, that is how *most* of the VFS layer works - it exposes
> > > helper functions that the filesystems can use (and most do), but
> > > doesn't force them.
> > > 
> > > Yes, the VFS layer does force some things - you can't avoid using
> > > dentries, for example, because that's literally how the VFS layer
> > > deals with filenames (and things like mounting etc). And honestly, the
> > > VFS layer does a better job of filename caching than any filesystem
> > > really can do, and with the whole UNIX mount model, filenames
> > > fundamentally cross filesystem boundaries anyway.
> > > 
> > > But clearly the VFS layer inode list handling isn't the best it can
> > > be, and unless we can fix that in some fundamental way (and I don't
> > > love the "let's use crazy lists instead of a simple one" models) I do
> > > think that just letting filesystems do their own thing if they have
> > > something better is a good model.
> > 
> > Well, I don't love adding more indirection and callbacks.
> 
> It's way better than open coding inode cache traversals everywhere.

Eh? You had a nice iterator for dlock-list :)

> The callback model is simply "call this function on every object",
> and it allows implementations the freedom to decide how they are
> going to run those callbacks.
> 
> For example, this abstraction allows XFS to parallelise the
> traversal. We currently run the traversal across all inodes in a
> single thread, but now that XFS is walking the inode cache we can
> push each shard off to a workqueue and run each shard concurrently.
> IOWs, we can actually make the traversal of large caches much, much
> faster without changing the semantics of the operation the traversal
> is trying to achieve.
> 
> We simply cannot do things like that without a new iteration model.
> Abstraction is necessary to facilitate a new iteration model, and a
> model that provides independent object callbacks allows scope for
> concurrent processing of individual objects.

Parallelized iteration is a slick possibility.
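
Roughly what that could look like - a sketch only, with made-up names
(inode_shard, shard_for_each_inode, ino_iter_fn) that aren't from the
RFC, just to illustrate pushing each shard to the unbound workqueue and
waiting for all of them:

#include <linux/workqueue.h>
#include <linux/fs.h>
#include <linux/slab.h>

typedef void (*ino_iter_fn)(struct inode *inode, void *data);

struct inode_shard;	/* hypothetical per-shard piece of the inode cache */

/* hypothetical: walk one shard, calling fn on each inode in it */
void shard_for_each_inode(struct inode_shard *shard, ino_iter_fn fn, void *data);

struct shard_walk {
	struct work_struct	work;
	struct inode_shard	*shard;
	ino_iter_fn		fn;
	void			*data;
};

static void shard_walk_workfn(struct work_struct *work)
{
	struct shard_walk *w = container_of(work, struct shard_walk, work);

	shard_for_each_inode(w->shard, w->fn, w->data);
}

/* Visit every shard concurrently; returns once all shards have been walked. */
static int walk_shards_concurrently(struct inode_shard **shards, int nr,
				    ino_iter_fn fn, void *data)
{
	struct shard_walk *walks;
	int i;

	walks = kcalloc(nr, sizeof(*walks), GFP_KERNEL);
	if (!walks)
		return -ENOMEM;

	for (i = 0; i < nr; i++) {
		walks[i].shard	= shards[i];
		walks[i].fn	= fn;
		walks[i].data	= data;
		INIT_WORK(&walks[i].work, shard_walk_workfn);
		queue_work(system_unbound_wq, &walks[i].work);
	}

	for (i = 0; i < nr; i++)
		flush_work(&walks[i].work);

	kfree(walks);
	return 0;
}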

My concern is that we've been trying to get away from callback-based
iteration - post-Spectre, the indirect calls are quite a bit more
expensive than external iterators - and we've generally been successful
with that.
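
For comparison, the two shapes in question - hypothetical signatures,
not anything in this patchset:

/*
 * Callback style: one indirect call per inode (a retpoline per object
 * under the Spectre mitigations), but the filesystem fully owns the
 * walk and can shard or parallelise it however it likes.
 */
int sb_for_each_inode(struct super_block *sb,
		      int (*fn)(struct inode *inode, void *data),
		      void *data);

/*
 * External iterator style: the caller owns the loop, so the per-object
 * work is straight-line code with no indirect call in the hot path.
 */
struct inode_iter;
void inode_iter_init(struct inode_iter *iter, struct super_block *sb);
struct inode *inode_iter_next(struct inode_iter *iter);
void inode_iter_exit(struct inode_iter *iter);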

> 
> > The underlying approach in this patchset of "just use the inode hash
> > table if that's available" - that I _do_ like, but this seems like
> > the wrong way to go about it, we're significantly adding to the amount
> > of special purpose "things" filesystems have to do if they want to
> > perform well.
> 
> I've already addressed this in my response to Christian. This is a
> mechanism that allows filesystems to be moved one-by-one to a new
> generic cache and iteration implementation without impacting
> existing code. Once we have that, scalability of the inode cache and
> traversals should not be a reason for filesystems "doing their own
> thing" because the generic infrastructure will be sufficient for
> most filesystem implementations.

Well, I'm not really seeing the need: in my performance testing, both
dlock-list and fast-list completely shift the bottleneck to the lru_list
locking, and the two patchsets were about equal, to within the margin of
error.

Which is a touch surprising, given that dlock-list works similarly to
lru_list - possibly it's because dlock-list only has SMT siblings
sharing a list versus whole NUMA nodes for the lru lists, or because lru
scanning does more cross-CPU/node accesses.
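
To make the granularity difference concrete - a sketch with made-up
names, not the actual dlock-list or list_lru internals:

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/topology.h>

struct shard {
	spinlock_t		lock;
	struct list_head	list;
};

/*
 * dlock-list style: roughly one list per CPU (or per pair of SMT
 * siblings), so adds rarely contend.  Allocation via alloc_percpu()
 * omitted here.
 */
static struct shard __percpu *cpu_shards;

static struct shard *cpu_shard(void)
{
	return raw_cpu_ptr(cpu_shards);
}

/*
 * list_lru style: one list per NUMA node, so every CPU on the node
 * hits the same lock.  nr_node_ids entries, allocation omitted.
 */
static struct shard *node_shards;

static struct shard *node_shard(void)
{
	return &node_shards[numa_node_id()];
}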

> > Converting the standard inode hash table to an rhashtable (or more
> > likely, creating a new standard implementation and converting
> > filesystems one at a time) still needs to happen, and then the "use the
> > hash table for iteration" approach could use that without every
> > filesystem having to specialize.
> 
> Yes, but this still doesn't help filesystems like XFS where the
> structure of the inode cache is highly optimised for the specific
> on-disk and in-memory locality of inodes. We aren't going to be
> converting XFS to a rhashtable based inode cache anytime soon
> because it simply doesn't provide the functionality we require.
> e.g. efficient lockless sequential inode number ordered traversal in
> -every- inode cluster writeback operation.

I was going to ask what your requirements are - I may take on the
general-purpose inode rhashtable code, although since I'm still pretty
buried we'll see.

Coincidentally, just today I'm working on an issue in bcachefs where
we'd also prefer an ordered data structure to a hash table for the inode
cache - in online fsck, we need to be able to check whether an inode is
still open, but checking for an inode in an interior snapshot node means
we have to scan and check whether any of the open inodes are in a
descendant subvolume.

A radix tree doesn't work for us, since our keys are { inum, subvol } -
96 bits - but it has me considering maple trees (or something like the
lockless RCU btree you were working on a while back); those modern
approaches should get close to hash table performance, if enough needs
for ordered access come up.
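
For reference, roughly the shape of the key and the query that wants an
ordered structure - hypothetical names, not current bcachefs code:

#include <linux/types.h>

struct vfs_ino_key {
	u64	inum;
	u32	subvol;
};

static int vfs_ino_key_cmp(const struct vfs_ino_key *l,
			   const struct vfs_ino_key *r)
{
	if (l->inum != r->inum)
		return l->inum < r->inum ? -1 : 1;
	if (l->subvol != r->subvol)
		return l->subvol < r->subvol ? -1 : 1;
	return 0;
}

/*
 * With an ordered index, "is this inum open in any subvolume?" becomes a
 * range scan over [{inum, 0}, {inum, U32_MAX}], with each hit checked for
 * whether its subvolume is a descendant of the snapshot being repaired,
 * rather than a walk over the whole hash table.
 */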

> > Failing that, or even regardless, I think we do need either dlock-list
> > or fast-list. "I need some sort of generic list, but fast" is something
> > I've seen come up way too many times.
> 
> There's nothing stopping you from using the dlist patchset for your
> own purposes. It's public code - just make sure you retain the
> correct attributions. :)

If this patchset goes in that might be just what I do, if I don't get
around to finishing fast-list :)





