> With 4k directory block size and your write heavy workload, you
> could get away with just 10 directories. However, it'd probably be
> better to use a single level 100-directory wide hash to bring it
> down to less than 200k files per directory….

Moved over to a single level with 100 directories.

> Small files should be a single extent, so there's heaps of room for
> a 200 byte xattr in the inode. Using 512 byte inodes will halve
> memory demand for caching inode buffers….

Moved to 512 byte inodes.

> In general, use the defaults and don't add anything extra unless you
> know it solves a specific problem you've witnessed in testing…

Moved to the defaults.

> Most likely going to be metadata writeback of inode buffers
> requiring RMW based on experience with gluster and ceph having
> exactly the same problems. Use blktrace to identify what the reads
> are, and see if those same blocks are written later on. An io marked
> with an "M" is a metadata IO. Post the blktrace output of the bits
> you find relevant.

Reformatted the drive and it's refilling. With the changes suggested
(100 directories, 512 byte inodes, defaults) it already seems better.
We're currently at 6% full and the reads are quite a bit lower than
they were before at similar fullness.

One thing I'm noticing in Grafana: the read requests/s keep climbing
(up to ~8/s) for around 15 minutes, then drop to ~1/s for 10-15
minutes, then build back up over the next 15 minutes, and so on.

> FWIW, how much RAM do you have in the system, and what does 'echo
> 200 > /proc/sys/fs/xfs/xfssyncd_centisecs' do to the behaviour?

The system has 24G of RAM. I'm guessing a move to 96 or 192G would
help a lot; in the end the system will have 36 of these 10TB drives.

I want to thank you and Eric for the time you've taken to help. It
feels good to make some progress on this issue.

-R. Jason Adams
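
For reference, a rough shell sketch of the layout changes described
above. The device name (/dev/sdb), mount point (/data), zero-padded
directory names and the cksum-based bucketing are illustrative
assumptions only, not necessarily what's actually in use here:

  # 512 byte inodes, everything else left at the mkfs.xfs defaults
  mkfs.xfs -i size=512 /dev/sdb
  mount /dev/sdb /data

  # single level, 100 directories wide
  for i in $(seq 0 99); do mkdir -p "/data/$(printf '%02d' "$i")"; done

  # place a file by hashing its name into one of the 100 buckets
  name="some-object-id"
  bucket=$(printf '%02d' $(( $(printf '%s' "$name" | cksum | cut -d' ' -f1) % 100 )))
  cp "$name" "/data/$bucket/"

Any stable hash of the file name works here; the only requirement is
that readers compute the same bucket the writer did.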
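
Likewise, a minimal sketch of the blktrace step suggested above, again
with /dev/sdb as a placeholder. In default blkparse output the action
is field 6 and the RWBS flags are field 7, so completed metadata reads
show up with "C" in field 6 and both "R" and "M" in field 7:

  # capture ~5 minutes of IO on the data device
  blktrace -d /dev/sdb -w 300 -o trace

  # completed metadata reads (action "C", RWBS containing R and M)
  blkparse -i trace | awk '$6 == "C" && $7 ~ /R/ && $7 ~ /M/' > meta-reads.txt

  # completed writes, for comparing sector numbers (field 8) against
  # the reads above to spot RMW of the same blocks
  blkparse -i trace | awk '$6 == "C" && $7 ~ /W/' > writes.txt

If the sectors in meta-reads.txt reappear in writes.txt shortly
afterwards, that's the metadata writeback read-modify-write described
above.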