Guest sync write IOPS are very poor

Hi,
   We are testing sync IOPS with fio (sync=1) for a database workload inside a VM;
the backend is librbd on Ceph (an all-SSD setup).

   The result is disappointing: we only get ~400 sync randwrite IOPS, whether
iodepth=1 or iodepth=32.
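
   For reference, a minimal sketch of the kind of job file we run inside the
guest (the block size, device path, and runtime here are placeholders, not our
exact settings):

    [global]
    ioengine=libaio
    direct=1
    sync=1             ; each write is O_SYNC, as in the database workload
    bs=4k              ; assumption: block size not stated above
    rw=randwrite
    runtime=60

    [guest-sync-randwrite]
    filename=/dev/vdb  ; placeholder: the virtio disk backed by librbd
    iodepth=1          ; also tried up to 32, with the same result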

   But testing on a physical machine with fio (ioengine=rbd, sync=1), we can
reach ~35K IOPS, so the QEMU RBD driver seems to be the bottleneck.
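
   The bare-metal run uses fio's rbd engine against the image directly, roughly
like this (pool, image, and client names are placeholders):

    [global]
    ioengine=rbd
    clientname=admin   ; placeholder cephx user
    pool=rbd           ; placeholder pool name
    rbdname=testimg    ; placeholder image name
    sync=1
    bs=4k              ; assumption, matching the guest job above
    rw=randwrite
    runtime=60

    [bare-metal-rbd]
    iodepth=1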

   The QEMU version is 2.1.2 with the rbd_aio_flush patch applied;
RBD cache is off and QEMU cache=none.
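
   For completeness, the relevant part of our QEMU command line looks roughly
like this (the pool, image, and id are placeholders):

    qemu-system-x86_64 ... \
        -drive file=rbd:rbd/testimg:id=admin,format=raw,if=virtio,cache=none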

So what is going wrong here? Is this normal? Any pointers would be much appreciated.
Thanks very much.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
