> On what version of kernel & what version of xfsprogs?

We're on CentOS 7:
  3.10.0-693.2.2.el7.x86_64
  xfsprogs.x86_64 4.5.0-12.el7

>> The application opens the file, writes the data (single call), and
>> closes the file. In addition there are a few lines added to the
>> extended attributes. The filesystem ends up with 18 to 20 million
>> files when the drive is full. The files are currently spread over
>> 128x128 directories using a hash of the filename.
>
> It's not uncommon for application filename hashing like this to be
> less efficient than the internal xfs directory algorithms, FWIW.

Good to know. We originally had 256x256 and changed to 128x128 to see
if it would help. I figured 18M files in 1 directory wasn't ideal
though.

>> The format command I'm using:
>>
>>   mkfs.xfs -f -i size=1024 ${DRIVE}
>
> Why 1k inodes?

Our extended attributes average ~200B, so I figured a little extra
room wouldn't hurt. Example:

  getfattr -d 3c75666a3279623367406b79633479346c777a2e6c706f7471696e3e
  # file: 3c75666a3279623367406b79633479346c777a2e6c706f7471696e3e
  user.offset="682"
  user.crc="1911595230"
  user.date="1506918540"
  user.id="f97800a5-66cd-4cb1-9a95-796ae0e8871e"
  user.inserted="1506918595"
  user.size="793169"

>> Mount options:
>>
>>   rw,noatime,attr2,inode64,allocsize=2048k,logbufs=8,logbsize=256k,noquota
>
> Why all these options?

Started with defaults, then kept adding options trying to resolve the
issue ;)

> Perhaps it's taking time reading through a large custom-hashed
> directory tree?  I don't know what that custom directory layout might
> look like.

It's currently 128 dirs, each with 128 subdirs in them.

> Have you tried starting with defaults, and working your way up from
> there (if needed?)

Yep. I'm so used to XFS "just working" that I started trying a lot of
options after searching for solutions. Lots of suggestions out there. ;)

-R. Jason Adams
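
A minimal sketch, for illustration, of the per-file ingest path described
above: a single write, the filename hashed into one of 128x128 directories,
then a few user.* xattrs. The choice of md5 as the hash, the /data and
/incoming paths, and exactly which attributes get set are assumptions here,
not details from the thread:

  # Sketch only: md5 and the /data, /incoming paths are assumed.
  name="3c75666a3279623367406b79633479346c777a2e6c706f7471696e3e"
  h=$(printf '%s' "$name" | md5sum)
  d1=$(( 0x${h:0:2} % 128 ))   # first hash byte -> top-level dir 0..127
  d2=$(( 0x${h:2:2} % 128 ))   # second hash byte -> subdir 0..127
  dir="/data/$d1/$d2"
  mkdir -p "$dir"
  cp "/incoming/$name" "$dir/$name"   # stand-in for the single write call
  setfattr -n user.size -v "$(stat -c %s "$dir/$name")" "$dir/$name"
  setfattr -n user.inserted -v "$(date +%s)" "$dir/$name"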
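
And a rough way to check how many bytes of extended attributes a file
actually carries, to judge whether they fit comfortably in the larger
inode literal area that -i size=1024 provides. The awk accounting is
approximate (attribute names plus values, user namespace only) and the
filename is just the example above:

  getfattr -d --absolute-names \
    3c75666a3279623367406b79633479346c777a2e6c706f7471696e3e 2>/dev/null |
    grep '=' |
    awk -F'="' '{ bytes += length($1) + length($2) - 1 }
                END { print bytes " bytes (approx)" }'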