This grew out of the discussion in my other thread ("Abysmal write performance because of excessive seeking (allocation groups to blame?)") -- which should really have been called "Free space fragmentation causes excessive seeks".

Could someone with a good hardware RAID (5 or 6, but mirrored setups would also be interesting) please run a little experiment for me? I have put up a modified sysbench here: <https://github.com/Ringdingcoder/sysbench>. It tries to simulate the write pattern I have seen with XFS. It would be really interesting to know how different RAID controllers cope with it.

- Checkout (or download tarball): https://github.com/Ringdingcoder/sysbench/tarball/master
- ./configure --without-mysql && make
- fallocate -l 8g test_file.0
- ./sysbench/sysbench --test=fileio --max-time=15 --max-requests=10000000 --file-num=1 --file-extra-flags=direct --file-total-size=8G --file-block-size=8192 --file-fsync-all=off --file-fsync-freq=0 --file-fsync-mode=fdatasync --num-threads=1 --file-test-mode=ag4 run

If you don't have fallocate, you can create the file by running the last command with "run" replaced by "prepare". Run the benchmark a few times to check whether the numbers are reasonably stable. When doing several runs in direct succession, the first one will likely be faster because the cache has not been loaded up yet.

The interesting part of the output is this:

Read 0b  Written 64.516Mb  Total transferred 64.516Mb  (4.301Mb/sec)
  550.53 Requests/sec executed

That is a measurement from my troubled RAID 6 volume (SmartArray P400, 6x 10k disks). From the other controller in this machine (RAID 1, SmartArray P410i, 2x 15k disks), I get:

Read 0b  Written 276.85Mb  Total transferred 276.85Mb  (18.447Mb/sec)
  2361.21 Requests/sec executed

The better result might be down to the better controller or to the RAID 1 layout, with the latter being the more likely explanation.

Regards,
Stefan
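P.S. In case it saves anybody some typing, here is a small wrapper to collect a few runs in one go and pull out just the throughput figures (a minimal sketch; it assumes you run it from the sysbench checkout with test_file.0 already created, and the count of 5 runs is arbitrary):

    #!/bin/sh
    # Repeat the benchmark a few times and print only the requests/sec
    # line from each run, so run-to-run variation is easy to eyeball.
    # Remember that the first run can differ because of cache effects.
    for i in 1 2 3 4 5; do
        ./sysbench/sysbench --test=fileio --max-time=15 \
            --max-requests=10000000 --file-num=1 --file-extra-flags=direct \
            --file-total-size=8G --file-block-size=8192 --file-fsync-all=off \
            --file-fsync-freq=0 --file-fsync-mode=fdatasync --num-threads=1 \
            --file-test-mode=ag4 run | grep 'Requests/sec'
    done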
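And if you would rather not build sysbench just to see what kind of write pattern I mean, something along these lines gives a rough feel for it -- this is only an illustration of small direct writes rotating between a few far-apart regions of the file, not a substitute for the ag4 test mode itself:

    #!/bin/sh
    # Rough illustration only: 8 KiB O_DIRECT writes that rotate between
    # four far-apart quarters of the 8 GiB file, forcing the disks to
    # seek between regions on nearly every write.
    BS=8192
    REGION=$((8 * 1024 * 1024 * 1024 / 4))   # one quarter of the file, in bytes
    n=0
    while [ $n -lt 1000 ]; do
        region=$((n % 4))
        # dd's seek= counts output blocks of size $BS
        dd if=/dev/zero of=test_file.0 bs=$BS count=1 \
           seek=$(( (region * REGION + (n / 4) * BS) / BS )) \
           oflag=direct conv=notrunc 2>/dev/null
        n=$((n + 1))
    done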