On Tue, Feb 22, 2022 at 01:24:50PM +1100, NeilBrown wrote:
>
> Hi Al,
>  I wonder if you might find time to have a look at this patch.  It
>  allows concurrent updates to a single directory.  This can result in
>  substantial throughput improvements when the application uses
>  multiple threads to create lots of files in the one directory, and
>  there is noticeable per-create latency, as there can be with NFS to
>  a remote server.
> Thanks,
> NeilBrown
>
> Some filesystems can support parallel modifications to a directory,
> either because the modifications happen on a remote server which does
> its own locking (e.g. NFS) or because they can internally lock just a
> part of a directory (e.g. many local filesystems, with a bit of work -
> the lustre project has patches for ext4 to support concurrent
> updates).
>
> To allow this, we introduce VFS support for parallel modification:
> unlink (including rmdir) and create.  Parallel rename is not (yet)
> supported.

Yay!

> If a filesystem supports parallel modification in a given directory,
> it sets S_PAR_UNLINK on the inode for that directory.  lookup_open()
> and the new lookup_hash_modify() (similar to __lookup_hash()) notice
> the flag and take a shared lock on the directory, and rely on a
> lock-bit in d_flags, much like parallel lookup relies on
> DCACHE_PAR_LOOKUP.

I suspect that you could enable this for XFS right now. XFS has
internal directory inode locking that should serialise all reads and
writes correctly regardless of what the VFS does. So while the VFS
might use concurrent updates (e.g. inode_lock_shared() instead of
inode_lock() on the dir inode), XFS has an internal metadata lock that
will then serialise the concurrent VFS directory modifications
correctly....

Yeah, I know, this isn't true concurrent dir updates, but it should
allow multiple implementations of the concurrent dir update VFS APIs
across multiple filesystems and shake out any assumptions that might
arise from a single implementation target (e.g. silly rename quirks).

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
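
For anyone following along, a rough sketch of the locking choice being
discussed above.  This is illustrative only, not the actual patch:
S_PAR_UNLINK is the flag from Neil's proposed series, the helper names
lock_dir_for_modify()/unlock_dir_for_modify() are invented here, and
the per-name serialisation via a lock bit in d_flags (the
DCACHE_PAR_LOOKUP analogue) is omitted.

        /*
         * Illustrative sketch only, not the actual patch.  S_PAR_UNLINK is
         * the flag proposed in Neil's series; the helper names are made up
         * for illustration.  The real series additionally serialises
         * operations on the same name via a lock bit in d_flags, much like
         * DCACHE_PAR_LOOKUP does for parallel lookup; that is not shown.
         */
        #include <linux/fs.h>

        static bool lock_dir_for_modify(struct inode *dir)
        {
                if (dir->i_flags & S_PAR_UNLINK) {
                        /*
                         * The filesystem says it serialises updates to this
                         * directory itself (remote server, internal dir
                         * inode lock, ...), so a shared lock on the
                         * directory suffices for create/unlink/rmdir.
                         */
                        inode_lock_shared(dir);
                        return true;
                }
                /* Default: fully exclusive, as today. */
                inode_lock(dir);
                return false;
        }

        static void unlock_dir_for_modify(struct inode *dir, bool shared)
        {
                if (shared)
                        inode_unlock_shared(dir);
                else
                        inode_unlock(dir);
        }

A filesystem with its own internal directory locking, as Dave describes
for XFS, could then opt in by setting S_PAR_UNLINK in i_flags when it
sets up a directory inode; exactly where that hook belongs in XFS is
left to the XFS folks.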