In an effort to maximize mbox performance and minimize fragmentation, I'm using these mount options in fstab. This is on a Debian Lenny box, but with a vanilla 2.6.34.1 kernel rolled from kernel.org source (Lenny ships with 2.6.26). xfsprogs is 2.9.8.

/dev/sda6  /home  xfs  defaults,logbufs=8,logbsize=256k,allocsize=1m

Since the actual XFS mount defaults aren't consistently published anywhere that I can find, I'm manually specifying logbufs and logbsize. I added allocsize=1m because my reading of the man page suggests it will preallocate an additional 1MB of extent space at the end of each mbox file each time it is written, which I would think should eliminate fragmentation of these files. However, this doesn't seem to be eliminating the fragmentation.

I added allocsize=1m at a date after all of the mbox files in question already existed. Does allocsize=1m only affect new files, or does it also preallocate at the end of existing files? I've probably totally misread what allocsize= actually does. Please educate me. If allocsize= doesn't help prevent fragmentation of mbox files, what can I do to mitigate it, other than regularly running xfs_fsr?

Filesystem  Type  Size  Used  Avail  Use%  Mounted on
/dev/sda6   xfs    94G  1.3G    92G    2%  /home

Filesystem  Type  Inodes  IUsed  IFree  IUse%  Mounted on
/dev/sda6   xfs      94M   1.1K    94M     1%  /home

actual 1096, ideal 1011, fragmentation factor 7.76%

I've recently run xfs_fsr, thus the low 7% figure. Every couple of weeks the fragmentation reaches ~30%. I save a lot of list mail, dozens to hundreds of messages per day for each of about 7 FOSS mailing lists. As I say, in just a couple of weeks these mbox files become fragmented, on the order of a dozen to a few dozen extents per mbox file. With so much free space available on this filesystem, why aren't these files being spread out with sufficient space between them to prevent fragmentation?

P.S.
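For reference, the "actual 1096, ideal 1011, fragmentation factor 7.76%" line above comes from xfs_db's "frag" command, and the percentage is just derived from the two extent counts. A minimal sketch of that arithmetic (the xfs_db invocation is shown as a comment since it needs read access to a real XFS block device; /dev/sda6 is the device from my fstab above):

```shell
# On a live system the report comes from (needs read access to the device):
#   xfs_db -r -c frag /dev/sda6
# which prints: actual NNNN, ideal NNNN, fragmentation factor NN.NN%

# Reproducing the percentage from the figures in the report above:
actual=1096
ideal=1011
awk -v a="$actual" -v i="$ideal" \
    'BEGIN { printf "fragmentation factor %.2f%%\n", (a - i) * 100 / a }'
```

With these numbers that prints "fragmentation factor 7.76%", matching the report, i.e. (1096 - 1011) * 100 / 1096.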
(Dave or someone has suggested on list that with newer kernels the defaults for these two (and other) mount options do not match those suggested in the man pages. I requested a feature some time ago that would actually put these default values in /proc files, to eliminate any doubt as to what the actual defaults in use are. I don't recall whether anything came of it. I've seen many an OP get "scolded" on list for manually specifying values that were apparently equal to the "default" values as stated by the responding dev. This problem would never exist if the documentation were complete and consistent. If it already is, then something is wrong, as I and hordes of other OPs haven't been able to locate this definitive information regarding mount defaults.)

-- 
Stan

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs