On Thu, Oct 06, 2011 at 09:55:07PM +0200, Bernhard Schmidt wrote:
> Hi,
>
> this is an XFS-related summary of a problem report I sent to the
> postfix mailing list a few minutes ago after a bulk mail test system
> blew up during a stress test.
>
> We have a few MTAs running SLES11.1 amd64 (2.6.32.45-0.3-default),
> with a 10 GB XFS spool directory using the default block size (4k).
> It was bombarded with mails faster than it could send them on, which
> eventually led to almost 2 million files of ~1.5kB in one directory.
> Suddenly, this started to happen:
>
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
> touch: cannot touch `a': No space left on device
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sdb              10475520   7471160   3004360  72% /var/spool/postfix-bulk

So you have a 10GB filesystem, with about 3GB of free space.

> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
> Filesystem             Inodes   IUsed   IFree IUse% Mounted on
> /dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk

And with 1.7 million inodes in it. That's a lot for a tiny filesystem,
and not really a use case that XFS is well suited to. XFS will work,
but it won't age gracefully under these conditions...

As it is, your problem is most likely fragmented free space (an aging
problem). Inodes are allocated in chunks of 64, so a new chunk requires
an -aligned- contiguous 16k extent for the default 256 byte inode size
(64 x 256 bytes = 16k). If there are no aligned, contiguous 16k extents
free, then inode allocation will fail even though df still shows free
blocks.

Running 'xfs_db -r "-c freesp -s" /dev/sdb' will give you a histogram
of free space extents in the filesystem, which will tell us whether you
are hitting this problem.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
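
For anyone wanting to check the same thing on their own system, here is
a minimal sketch built from the commands already quoted above (the
device /dev/sdb and the spool mount point are taken from this report,
and the 256 byte inode size is just the default; substitute your own
values, and note xfs_info comes with xfsprogs):

  # 64 inodes per chunk * 256 bytes per inode = 16384 bytes, so every
  # new inode chunk needs an aligned, contiguous 16k extent.
  xfs_info /var/spool/postfix-bulk    # 'isize=256' in the meta-data line confirms the default inode size
  xfs_db -r "-c freesp -s" /dev/sdb   # histogram of free space extents, sizes in filesystem blocks

With a 4k block size a 16k extent is 4 blocks, so if nearly all of the
free extents reported by freesp fall into the 1-3 block buckets, free
space is too fragmented to allocate a new inode chunk, and file
creation fails with ENOSPC even though df reports free space.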