On Tue, Oct 13, 2015 at 07:28:48AM +0000, Al Lau (alau2) wrote:
> Are the xfs_db and filefrag the utilities to use to determine file fragmentation?
>
> # df -k /var/kmem_alloc
> Filesystem      1K-blocks       Used  Available Use% Mounted on
> /dev/sdf1      3905109820 3359385616  545724204  87% /var/kmem_alloc
>
> # xfs_db -r -c frag /dev/sdf1
> actual 438970, ideal 388168, fragmentation factor 11.57%

http://xfs.org/index.php/XFS_FAQ#Q:_The_xfs_db_.22frag.22_command_says_I.27m_over_50.25._Is_that_bad.3F

> # ls -l fragmented
> -rw-r--r--. 1 root root 3360239878657 Oct 13 07:25 fragmented
> # filefrag fragmented
> fragmented: 385533 extents found

That's a lot of extents, but for a 3TB sparse file that is being written in
random 4k blocks it's expected, and there's little you can do about it.

Preallocation of the file or use of extent size hints will reduce physical
fragmentation, but you only want to use those if the file will eventually
become non-sparse and sequential read IO performance is required. i.e. the
definition of "fragmented" really depends on the application, the IO
patterns, and whether the current physical layout is achieving the desired
performance attributes of the file in question....

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
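[For readers finding this thread later: the two mitigations Dave mentions, preallocation and extent size hints, can be applied with standard tools. This is only a sketch; the path and sizes below are illustrative, not taken from the report above, and both commands must run against a file on an XFS filesystem.]

```shell
# Option 1: preallocate the file's eventual size up front, so XFS can
# reserve contiguous space instead of allocating 4k blocks on demand.
# (3T here is a stand-in for the file's final size.)
fallocate -l 3T /var/kmem_alloc/fragmented

# Option 2: set an extent size hint on the (empty) file before writing,
# so each random 4k write allocates a larger aligned extent, e.g. 1 MiB:
xfs_io -c "extsize 1m" /var/kmem_alloc/fragmented

# Either way, verify the resulting layout:
filefrag -v /var/kmem_alloc/fragmented
```

Note the trade-off from the reply above still applies: both options spend disk space (and, for sparse files, defeat the sparseness) to buy sequential read performance, so they are only worthwhile if the file will eventually be read that way.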