On Fri, Feb 03, 2012 at 09:01:14PM +0000, Brian Candler wrote:
> On Fri, Feb 03, 2012 at 02:03:04PM -0500, Christoph Hellwig wrote:
> > > With defaults, the files in one directory are spread all over the
> > > filesystem.  But with -i size=1024, the files in a directory are stored
> > > adjacent to each other.  Hence reading all the files in one directory
> > > requires far less seeking across the disk, and runs about 3 times faster.
> >
> > Not sure if you mentioned it somewhere before, but:
> >
> >  a) how large is the filesystem?
>
> 3TB.
>
> >  b) do you use the inode64 mount option?
>
> No: the only mount options I've given are noatime,nodiratime.
>
> >  c) can you see the same good behaviour when using inode64 and small
> >     inodes (note that inode64 can NOT be set using remount)
>
> I created a fresh filesystem (/dev/sdh), default parameters, but mounted it
> with inode64.  Then I tar'd across my corpus of 100K files.  Result: files
> are located close to the directories they belong to, and read performance
> zooms.
>
> So I conclude that XFS *does* try to keep file extents close to the
> enclosing directory, but was being thwarted by the limitations of 32-bit
> inodes.
>
> There is a comment "performance sucks" at:
> http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F
>
> However, there it talks about files [extents?] being located close to their
> inodes, rather than file extents being located close to their parent
> directory.

With inode64, inodes are located close to their parent directory's inode,
and file extent allocation is close to the owner's inode.  Hence file
extent allocation is close to the parent directory inode, too.

Directory inodes are where the locality changes: each new subdirectory is
placed in a different AG, so with the above behaviour you get per-directory
locality with inode64.

Cheers,

Dave.
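The effect Brian saw has a directly observable symptom: with inode64 in
effect and enough allocated data, inode numbers grow past 32 bits, whereas
a 32-bit-inode mount keeps all inodes (and hence their nearby extents)
packed below that boundary.  A quick sketch of checking for this (the path
is illustrative; GNU find and awk are assumed):

```shell
# Count inode numbers in a tree that exceed 2^32 - 1.  A nonzero count
# means the filesystem is handing out 64-bit inode numbers, i.e. inode64
# placement is in effect and inodes are being allocated in higher AGs.
find /etc -xdev -printf '%i\n' 2>/dev/null | awk '
    $1 > 4294967295 { big++ }
    END { printf "inodes > 32 bits: %d\n", big + 0 }'
```

On a freshly made small filesystem this will report zero either way; the
split only shows up once allocation has pushed past the first terabyte or
so of inode-capable space.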
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs