Re: RAID 5,6 sequential writing seems slower in newer kernels

On 12/02/2015 07:12 PM, Dallas Clement wrote:
> All measurements computed from bandwidth averages taken on 12 disk
> array with XFS filesytem using fio with direct=1, sync=1,
> invalidate=1.

Why do you need direct=1 and sync=1?  Have you checked an strace of the
app you are trying to model to confirm that it actually uses these
flags?
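For example, something along these lines will show which open flags
and sync calls the app actually issues ("myapp" standing in for your
real workload):

  strace -f -e trace=open,openat,fsync,fdatasync,sync -o app.trace ./myapp
  grep -E 'O_SYNC|O_DIRECT' app.trace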

> Seems incredulous!?

Not with those options, particularly sync=1.  That opens the file with
O_SYNC, which forces an inode metadata update and a hardware queue
flush after every write operation.  Support for those flushes on
various devices has changed over time.
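For reference, your combination of options corresponds to a job file
something like this (filename, block size, and size are made-up
placeholders, not your actual settings):

  [seqwrite]
  filename=/mnt/raid/testfile
  rw=write
  bs=1M
  size=4g
  direct=1
  # sync=1 opens with O_SYNC: data and metadata flushed on every write
  sync=1
  invalidate=1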

I suspect that if you bisect the kernel to pinpoint the change(s)
responsible, you'll find a patch that closes a device-specific or
filesystem sync bug, or one that enables deep queues for a device.
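The mechanics are roughly this, with the version tags below standing
in for your known-fast and known-slow kernels:

  git bisect start
  git bisect bad v4.3       # a kernel where the fio numbers are slow
  git bisect good v3.10     # the last kernel known to be fast
  # build and boot the suggested commit, rerun the fio job, then mark it:
  git bisect good           # or "git bisect bad"; repeat until the culprit appears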

Modern software that needs file integrity guarantees makes sparing use
of fdatasync and/or fsync and avoids sync entirely.  You'll have a more
believable test if you use fsync_on_close=1 or end_fsync=1.
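In fio terms, something like this (same placeholder filename as above)
keeps the integrity check but drops the per-write flush:

  [seqwrite]
  filename=/mnt/raid/testfile
  rw=write
  bs=1M
  size=4g
  direct=1
  # one fsync when all I/O for the job completes, instead of O_SYNC per write
  end_fsync=1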

Phil


