On Mon, Sep 23, 2024 at 07:26:31PM GMT, Linus Torvalds wrote:
> On Mon, 23 Sept 2024 at 17:27, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> >
> > However, the problematic workload is cold cache operations where
> > the dentry cache repeatedly misses. This places all the operational
> > concurrency directly on the inode hash as new inodes are inserted
> > into the hash. Add memory reclaim and that adds contention as it
> > removes inodes from the hash on eviction.
>
> Yeah, and then we spend all the time just adding the inodes to the
> hashes, and probably fairly seldom use them. Oh well.
>
> And I had missed the issue with PREEMPT_RT and the fact that right now
> the inode hash lock is outside the inode lock, which is problematic.
>
> So it's all a bit nasty.
>
> But I also assume most of the bad issues end up mainly showing up on
> just fairly synthetic benchmarks with ramdisks, because even with a
> good SSD I suspect the IO for the cold cache would still dominate?

Not for bcachefs, because filling into the vfs inode cache doesn't
require a disk read - inodes are cached in the inodes btree, and
they're much smaller there. We use a varint encoding (rough sketch at
the end of this mail), so they're typically 50-100 bytes, last I
checked, compared to ~1k, plus or minus, for an inode in the vfs inode
cache.

Thomas Bertschinger has been working on applications at LANL where
avoiding pulling into the vfs inode cache seems to make a significant
difference (file indexing, in his case) - it turns out there's an xattr
syscall that's missing, which I believe he'll be submitting a patch
for.

But stat/statx always pulls into the vfs inode cache, and that's likely
worth fixing.
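
For illustration, a minimal LEB128-style varint encoder/decoder in the
same spirit - this is not the actual on-disk format (that lives in
fs/bcachefs/varint.c), just a sketch of why integer-heavy inode fields
shrink: small values like sizes, timestamps and counts take one or two
bytes instead of a fixed eight.

#include <stdint.h>
#include <stddef.h>

static size_t varint_encode(uint8_t *out, uint64_t v)
{
	size_t len = 0;

	do {
		uint8_t byte = v & 0x7f;	/* low 7 bits per output byte */

		v >>= 7;
		if (v)
			byte |= 0x80;		/* high bit set: more bytes follow */
		out[len++] = byte;
	} while (v);

	return len;
}

static size_t varint_decode(const uint8_t *in, uint64_t *v)
{
	size_t len = 0;
	unsigned shift = 0;

	*v = 0;
	do {
		*v |= (uint64_t)(in[len] & 0x7f) << shift;
		shift += 7;
	} while (in[len++] & 0x80);

	return len;
}

A u64 timestamp in the low millions encodes to 3-4 bytes this way,
which is where most of the size win over fixed-width fields comes from.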
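
To make the stat/statx point concrete, here's a userspace sketch (my
example, not from the thread): every variant of this call goes through
path lookup and vfs_getattr(), which instantiates the full vfs inode.
As far as I know there's currently no statx flag that asks for
attributes without populating the inode cache.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	struct statx stx;

	if (argc < 2)
		return 1;

	/*
	 * Even a "cheap" metadata-only query like this instantiates a
	 * full struct inode on the statx path today; AT_STATX_DONT_SYNC
	 * only skips remote attribute revalidation, it doesn't avoid
	 * the inode cache fill.
	 */
	if (statx(AT_FDCWD, argv[1], AT_STATX_DONT_SYNC,
		  STATX_BASIC_STATS, &stx))
		return 1;

	printf("%s: size %llu\n", argv[1],
	       (unsigned long long)stx.stx_size);
	return 0;
}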