On Sun, May 06, 2012 at 11:01:14AM +0200, Stefan Priebe wrote:
> Hi,
>
> for a few days I've experienced a really slow filesystem on one of
> our backup systems.
>
> I'm not sure whether this is XFS related or related to the
> controller / disks.
>
> It is a RAID 10 of 20 SATA disks, and I can only write to them at
> about 700 KB/s while doing random I/O.

What sort of random IO? Size, read, write, direct or buffered, data or
metadata, etc?

iostat -x -d -m 5 and vmstat 5 traces would be useful to see if it is
your array that is slow.....

> I tried vanilla kernels 3.0.30 and 3.3.4 - no difference. Writing to
> another partition on another XFS array works fine.
>
> Details:
> #~ df -h
> /dev/sdb1 4,6T 4,4T 207G 96% /mnt

Your filesystem is near full - the allocation algorithms definitely
slow down as you approach ENOSPC, and IO efficiency goes to hell
because of a lack of contiguous free space to allocate from.

> #~ df -i
> /dev/sdb1 4875737052 4659318044 216419008 96% /mnt

You have 4.6 *billion* inodes in your filesystem?

> Any ideas?

None until I understand your workload....

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
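Dave's surprise at the inode count can be made concrete with a quick back-of-envelope calculation (this sketch is not part of the original thread, and it assumes the "4,4T" used figure from the quoted `df -h` output means roughly 4.4 TiB): dividing used space by used inodes suggests the average file on this array is only about 1 KB, which is exactly the kind of tiny-file, metadata-heavy workload that suffers worst from random I/O on a nearly full filesystem.

```python
# Back-of-envelope check using the df figures quoted above.
# Assumption: "4,4T" in the df -h output means ~4.4 TiB of used space.
used_bytes = 4.4 * 2**40           # space used, per `df -h`
used_inodes = 4_659_318_044        # inodes used (IUsed), per `df -i`

avg_file_size = used_bytes / used_inodes
print(f"average bytes per file: {avg_file_size:.0f}")  # roughly 1 KB
```

With ~4.66 billion files averaging ~1 KB each, most of the I/O cost is inode and directory metadata rather than file data, which fits the symptoms described.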