Hi Christoph,

On 7 January 2014 17:58, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
>> This is likely a problem of the Linux direct IO implementation. The thing is
>> that in Linux, when you are doing appending direct IO (i.e., direct IO which
>> changes the file size), the IO is performed synchronously so that we keep our
>> lives simpler with the inode size update etc. (and frankly, our current locking
>> rules make an inode size update on IO completion almost impossible). Since
>> appending direct IO isn't very common, we seem to get away with this
>> simplification just fine...
>
> Shouldn't be too much of a problem, at least for XFS and maybe even ext4,
> with the workqueue-based I/O end handler. For XFS we protect size
> updates with the ilock, which we have already taken in that handler; not sure
> what ext4 would do there.
>

Actually, my initial report (14.67 Mb/sec, 3755.41 Requests/sec) was about ext4.
However, I have tried XFS as well, and it was a bit slower than ext4 on all occasions.
On the same machine, the results for XFS were:

13.97 Mb/sec, 3576.27 Requests/sec

/dev/mapper/mpathc on /mnt/xfs type xfs (rw,noatime,nodiratime,nobarrier)
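
For reference, here is a minimal userspace sketch of the two cases Jan
describes above: direct writes that extend i_size ("appending" direct IO,
which the kernel forces to complete synchronously) versus direct writes into
space that already lies inside i_size. This is not the benchmark I ran; the
file names, block size, block count and the use of fallocate() are assumptions
purely for illustration.

/* dio-append-vs-inside.c -- illustration only, not the original benchmark */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ   4096            /* assumed; must be a multiple of the logical block size */
#define NBLOCKS 1024            /* assumed file size: 4 MiB */

int main(void)
{
        void *buf;
        int fd, i;

        if (posix_memalign(&buf, BLKSZ, BLKSZ))
                return 1;
        memset(buf, 'x', BLKSZ);

        /* Case 1: appending direct IO -- every write extends i_size. */
        fd = open("dio-append.dat", O_CREAT | O_TRUNC | O_WRONLY | O_DIRECT, 0644);
        if (fd < 0) { perror("open append"); return 1; }
        for (i = 0; i < NBLOCKS; i++)
                if (pwrite(fd, buf, BLKSZ, (off_t)i * BLKSZ) != BLKSZ)
                        perror("pwrite (append)");
        close(fd);

        /* Case 2: preallocate and set i_size up front, then overwrite inside i_size. */
        fd = open("dio-inside.dat", O_CREAT | O_TRUNC | O_WRONLY | O_DIRECT, 0644);
        if (fd < 0) { perror("open inside"); return 1; }
        if (fallocate(fd, 0, 0, (off_t)NBLOCKS * BLKSZ))
                perror("fallocate");
        for (i = 0; i < NBLOCKS; i++)
                if (pwrite(fd, buf, BLKSZ, (off_t)i * BLKSZ) != BLKSZ)
                        perror("pwrite (inside)");
        close(fd);

        free(buf);
        return 0;
}

If I understand the discussion correctly, case 1 is the one that hits the
synchronous path Jan mentions, while case 2 should be eligible for the
workqueue-based I/O end handling Christoph refers to (at least on XFS);
whether ext4 behaves the same way there is exactly the open question.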