Why is O_DSYNC on Linux so slow / what's wrong with my SSD?

Hello,

while investigating why an application was so slow on my SSD, with
high I/O waits while the app was using the raw block device, I've
found that this is caused by opening the block device with O_DSYNC.

I've tested with dd (oflag=direct,dsync) and fio (--direct=1 and
--sync=1) and got these "strange" results:

fio --sync=1:
WRITE: io=1694.0MB, aggrb=57806KB/s, minb=57806KB/s, maxb=57806KB/s,
mint=30008msec, maxt=30008msec

fio --sync=0:
WRITE: io=5978.0MB, aggrb=204021KB/s, minb=204021KB/s, maxb=204021KB/s,
mint=30004msec, maxt=30004msec

I get the same results on a Crucial m4 as on my Intel 530 SSD.

I also tried the same thing under FreeBSD 9.1, which shows roughly the
same results for sync=1 as for sync=0:

sync=0:
WRITE: io=5984.0MB, aggrb=204185KB/s, minb=204185KB/s, maxb=204185KB/s,
mint=30010msec, maxt=30010msec

sync=1:
WRITE: io=5843.0MB, aggrb=199414KB/s, minb=199414KB/s, maxb=199414KB/s,
mint=30004msec, maxt=30004msec

Can anyone explain why O_DSYNC is so slow for my app on Linux?

The kernel used is vanilla 3.10.19.

Thanks!


Greets Stefan
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



