On Sat, Jan 30, 2016 at 11:43:56AM +0100, Christian Affolter wrote:
> Hi Dave,
>
> On 29.01.2016 23:25, Dave Chinner wrote:
> > On Fri, Jan 29, 2016 at 11:53:35AM +0100, Christian Affolter wrote:
> >> Hi everyone,
> >>
> >> I'm trying to understand the differences in some bandwidth and IOPS
> >> test results I see while running a random-write, full-stripe-width
> >> aligned fio test (using libaio with direct IO) on a hardware RAID 6
> >> raw device versus on the same device with the XFS file system on top
> >> of it.
> >>
> >> On the raw device I get:
> >> write: io=24828MB, bw=423132KB/s, iops=137, runt= 60085msec
> >>
> >> With XFS on top of it:
> >> write: io=14658MB, bw=249407KB/s, iops=81, runt= 60182msec
> >
> > Now repeat with a file that is contiguously allocated before you
> > start. And also perhaps with the "swalloc" mount option.
>
> Wow, thanks! After specifying --fallocate=none (instead of the default
> fallocate=posix), bandwidth and iops increase and are even higher than
> on the raw device:
>
> write: io=30720MB, bw=599232KB/s, iops=195, runt= 52496msec
>
> I'm eager to learn what's going on behind the scenes, can you give a
> short explanation?

Usually when concurrent direct IO writes are slower than the raw device
it's because something is causing IO submission serialisation. Usually
that's to do with writes that extend the file, because that can require
the inode to be locked exclusively. Whatever behaviour the fio
configuration change modified, it removed the IO submission
serialisation, and so it's now running at full disk speed.

As to why XFS is faster than the raw block device: the XFS file is only
30GB, so the random writes are only seeking a short distance compared
to the block device test, which is seeking across the whole device.

> Btw. mounting the volume with "swalloc" didn't make any change.

Which means there is no performance differential between stripe unit
and stripe width aligned writes in this test on your hardware.

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
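
For anyone wanting to reproduce the comparison, a minimal fio job sketch
along the lines of the test discussed above might look like the
following. The block size, queue depth, file size and mount point are
illustrative assumptions, not the original poster's values; the line
that matters for the behaviour described in the thread is the fallocate
setting.

    [global]
    ioengine=libaio        ; asynchronous IO submission via libaio
    direct=1               ; O_DIRECT, bypass the page cache
    rw=randwrite           ; random writes
    bs=3m                  ; assumed full stripe width; match your RAID geometry
    size=30g               ; size of the test file
    runtime=60
    time_based
    iodepth=32             ; assumed queue depth
    filename=/mnt/test/fio.file   ; assumed XFS mount point

    [stripe-aligned-randwrite]
    fallocate=none         ; lay the file out by writing it, rather than
                           ; preallocating with the default fallocate=posix

Running the same job file directly against the raw block device
(filename=/dev/sdX) gives the baseline numbers quoted at the top of the
thread.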
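
The other suggestion in the thread, preallocating a contiguous file up
front and mounting with "swalloc", could be sketched roughly as below.
The device, mount point and file name are placeholders; falloc does not
strictly guarantee a single extent, but on a freshly made filesystem it
usually comes close, and xfs_bmap shows the resulting layout.

    # mount -o swalloc /dev/sdX /mnt/test
    # xfs_io -f -c "falloc 0 30g" /mnt/test/fio.file
    # xfs_bmap -v /mnt/test/fio.file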