rbd_aio_flush causes poor guest OS sync write IOPS?

Hi,
   We are testing sync-write IOPS with fio (sync=1) for database workloads inside a VM;
the backend is librbd and Ceph (an all-SSD setup).
   The result is disappointing: we only get ~400 IOPS for sync random writes, anywhere
from iodepth=1 to iodepth=32.
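   The in-guest job looks roughly like this (the 4k block size and the /dev/vdb
device path are placeholders for our setup; iodepth was varied from 1 to 32):

[vm-sync-randwrite]
filename=/dev/vdb
ioengine=libaio
rw=randwrite
bs=4k
sync=1
iodepth=1
runtime=60
time_based=1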
    But testing on a physical machine with fio ioengine=rbd and sync=1, we can
reach ~35K IOPS, so the QEMU rbd driver seems to be the bottleneck.
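    On the physical machine the job is roughly the following (pool and image
names are placeholders):

[host-rbd-randwrite]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=test
rw=randwrite
bs=4k
sync=1
iodepth=1
runtime=60
time_based=1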

    The QEMU version is 2.1.2 with the rbd_aio_flush patch applied.
    The RBD cache is off, and QEMU uses cache=none.
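    For reference, the relevant settings look roughly like this (pool and image
names are placeholders):

# client-side ceph.conf: RBD cache disabled
[client]
rbd cache = false

# qemu drive definition
-drive file=rbd:rbd/test:conf=/etc/ceph/ceph.conf,format=raw,if=virtio,cache=none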

    IMHO, since Ceph already does a sync write for every write to disk,
rbd_aio_flush could ignore the flush-cache command when the RBD cache is off,
so that we get higher IOPS for sync=1 (similar to direct=1 writes), right?
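    To make the question concrete, this is the bare librbd call I am talking
about; a minimal standalone sketch (not the qemu code path), where pool "rbd",
image "test" and the build line are placeholders for my setup
(gcc flush_test.c -lrados -lrbd -o flush_test):

/*
 * Minimal sketch, not qemu code: open an image with librbd, issue a
 * single rbd_aio_flush() and wait for it to complete.
 * Pool "rbd" and image "test" are placeholders.
 */
#include <stdio.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;
    rbd_completion_t comp;

    /* Connect as client.admin using the default ceph.conf locations. */
    if (rados_create(&cluster, NULL) < 0 ||
        rados_conf_read_file(cluster, NULL) < 0 ||
        rados_connect(cluster) < 0) {
        fprintf(stderr, "failed to connect to the cluster\n");
        return 1;
    }
    if (rados_ioctx_create(cluster, "rbd", &ioctx) < 0 ||
        rbd_open(ioctx, "test", &image, NULL) < 0) {
        fprintf(stderr, "failed to open pool/image\n");
        rados_shutdown(cluster);
        return 1;
    }

    /* Issue one asynchronous flush and wait for its completion. */
    rbd_aio_create_completion(NULL, NULL, &comp);
    if (rbd_aio_flush(image, comp) == 0) {
        rbd_aio_wait_for_complete(comp);
        printf("rbd_aio_flush completed, rc=%ld\n",
               (long)rbd_aio_get_return_value(comp));
    }
    rbd_aio_release(comp);

    rbd_close(image);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}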

   I would very much appreciate your reply!