Re: Guest sync write IOPS so poor.

The fio rbd engine with fsync=1 seems stuck:
Jobs: 1 (f=1): [w(1)] [0.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 1244d:10h:39m:18s]

But fio against /dev/rbd0 with sync=1 direct=1 ioengine=libaio iodepth=64 gets very high IOPS (~35K), similar to a plain direct write.
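For reference, that /dev/rbd0 run corresponds roughly to the following job file; the block size, write pattern, and run time are assumptions, not taken from the actual test:

[krbd-sync-write]
filename=/dev/rbd0
ioengine=libaio
direct=1
sync=1
iodepth=64
rw=randwrite        # assumed 4k random writes
bs=4k               # assumed block size
runtime=60
time_based=1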

I'm confused by that result. IMHO, Ceph could simply ignore the flush (sync cache) command, since it always uses synchronous writes to its journal anyway, right?

Why do we get such poor sync IOPS, and how does Ceph handle it?
Any reply would be much appreciated!

2016-02-25 22:44 GMT+08:00 Jason Dillaman <dillaman@xxxxxxxxxx>:
> 35K IOPS with ioengine=rbd sounds like the "sync=1" option doesn't actually
> work. Or it's not touching the same object (but I wonder whether write
> ordering is preserved at that rate?).

The fio rbd engine does not support "sync=1"; however, it should support "fsync=1" to accomplish roughly the same effect.
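For illustration, a minimal rbd-engine job along those lines might look like the following sketch; the pool, image, and client names are placeholders, and the 4k random-write pattern is just an example:

[rbd-fsync-write]
ioengine=rbd
clientname=admin    # placeholder cephx client
pool=rbd            # placeholder pool name
rbdname=testimg     # placeholder image name
fsync=1             # flush after every write, in place of sync=1
rw=randwrite
bs=4k
iodepth=1
runtime=60
time_based=1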

Jason

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
