On Wed, Oct 02, 2024 at 10:34:58PM GMT, Dave Chinner wrote:
> On Wed, Oct 02, 2024 at 12:00:01PM +0200, Christian Brauner wrote:
> > On Wed, Oct 02, 2024 at 11:33:17AM GMT, Dave Chinner wrote:
> > > What do people think of moving towards per-sb inode caching and
> > > traversal mechanisms like this?
> >
> > Patches 1-4 are great cleanups that I would like us to merge even
> > independent of the rest.
>
> Yes, they make it much easier to manage the iteration code.
>
> > I don't have big conceptual issues with the series otherwise. The only
> > thing that makes me a bit uneasy is that we are now providing an api
> > that may encourage filesystems to do their own inode caching even if
> > they don't really have a need for it just because it's there. So really
> > a way that would've solved this issue generically would have been my
> > preference.
>
> Well, that's the problem, isn't it? :/
>
> There really isn't a good generic solution for global list access
> and management. The dlist stuff kinda works, but it still has
> significant overhead and doesn't get rid of spinlock contention
> completely because of the lack of locality between list add and
> remove operations.

There is though; I haven't posted it yet because it still needs some
work, but the concept works and performs about the same as dlock-list.

https://evilpiepirate.org/git/bcachefs.git/log/?h=fast_list

The thing that needs to be sorted before posting is that it can't
shrink the radix tree. generic-radix-tree doesn't support shrinking,
and I could add that, but then ida doesn't provide a way to query the
highest id allocated (and xarray doesn't support backwards iteration).

So I'm going to try it with idr and see how that performs (idr is not
really the right data structure for this; a split ida plus an item
radix tree is better, so I might end up doing something else).

But this approach, with more work, will also address the list_lru
lock contention.
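
For the curious, a minimal sketch of the concept (illustrative names
only, not the actual fast_list code; the real version needs more care
around concurrency and id locality):

#include <linux/generic-radix-tree.h>
#include <linux/idr.h>

/*
 * An IDA hands out small, dense slot indices; a generic radix tree
 * stores item pointers at those indices. Add/remove only touch their
 * own slot, and iteration walks the radix tree instead of a shared
 * linked list, so there's no global list lock to contend on.
 *
 * Init with ida_init(&l->slots) and genradix_init(&l->items).
 */
struct fast_list {
	struct ida		slots;	/* allocates slot indices */
	GENRADIX(void *)	items;	/* slot index -> item */
};

static int fast_list_add(struct fast_list *l, void *item)
{
	void **slot;
	int id = ida_alloc(&l->slots, GFP_KERNEL);

	if (id < 0)
		return id;

	slot = genradix_ptr_alloc(&l->items, id, GFP_KERNEL);
	if (!slot) {
		ida_free(&l->slots, id);
		return -ENOMEM;
	}

	*slot = item;
	return id;		/* caller keeps the id for removal */
}

static void fast_list_del(struct fast_list *l, int id)
{
	void **slot = genradix_ptr(&l->items, id);

	if (slot)
		*slot = NULL;	/* iteration skips NULL slots */
	ida_free(&l->slots, id);
}

Iteration is then just a genradix walk:

	struct genradix_iter iter;
	void **slot;

	genradix_for_each(&l->items, iter, slot)
		if (*slot)
			do_something(*slot);	/* hypothetical visitor */

The shrinking problem is visible right here: freed slots get reused
via the ida, but genradix never frees interior nodes until
genradix_free(). The idr variant collapses both structures into one
(id = idr_alloc(&list, item, 0, 0, GFP_KERNEL) to add,
idr_remove(&list, id) to remove), and since idr sits on top of the
xarray, which does free empty nodes, it should sidestep the shrinking
issue, at the cost of being a less ideal fit, as noted above.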