Re: Terrible performance of sequential O_DIRECT 4k writes in SAN environment. ~3 times slower than Solaris 10 with the same HBA/Storage.

Hi Christoph,

On 7 January 2014 17:58, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
>>   This is likely a problem of Linux direct IO implementation. The thing is
>> that in Linux when you are doing appending direct IO (i.e., direct IO which
>> changes file size), the IO is performed synchronously so that we have our
>> life simpler with inode size update etc. (and frankly our current locking
>> rules make inode size update on IO completion almost impossible). Since
>> appending direct IO isn't very common, we seem to get away with this
>> simplification just fine...
>
> Shouldn't be too much of a problem at least for XFS and maybe even ext4
> with the workqueue based I/O end handler.  For XFS we protect size
> updates by the ilock which we already taken in that handler, not sure
> what ext4 would do there.
>

Actually my initial report (14.67Mb/sec, 3755.41 Requests/sec) was about ext4.
However, I have tried XFS as well; it was a bit slower than ext4 on all
occasions. On the same machine the results for XFS were:

    13.97Mb/sec  3576.27 Requests/sec

/dev/mapper/mpathc on /mnt/xfs type xfs (rw,noatime,nodiratime,nobarrier)
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html