rbd cache limiting IOPS

Hi,

we are running an OpenStack environment with Ceph block storage. There are six nodes in the current Ceph cluster (12.2.10) with NVMe SSDs and a P4800X Optane for RocksDB and the WAL. We decided to use the rbd writeback cache with KVM/QEMU. The write latency is incredibly good (~85 µs) and the read latency is still good (~0.6 ms), but we are limited to ~23,000 IOPS inside a KVM machine. We then ran the same fio benchmark with the rbd cache disabled and got ~65,000 IOPS, but of course the QD1 write latency increased to ~0.6 ms.
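For context, the numbers come from 4k random-write fio runs along the lines of the sketch below; the device path, runtime, and job options here are placeholders rather than our exact invocation:

    # 4k random writes against the attached volume inside the guest.
    # iodepth=1 gives the QD1 latency figures; a higher iodepth
    # (e.g. 32) is where the ~23,000 IOPS ceiling shows up.
    fio --name=rbd-test --filename=/dev/vdb \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
        --iodepth=1 --numjobs=1 --runtime=60 --time_based \
        --group_reporting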
We tried to tune the following (see the ceph.conf sketch below):

rbd cache size -> 256MB
rbd cache max dirty -> 192MB
rbd cache target dirty -> 128MB
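Concretely, that means a [client] section on the hypervisors along these lines (values are in bytes):

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true   # default, shown for completeness
    rbd cache size = 268435456                  # 256 MB
    rbd cache max dirty = 201326592             # 192 MB
    rbd cache target dirty = 134217728          # 128 MB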

but we are still capped at ~23,000 IOPS with the writeback cache enabled.

Right now we are not sure whether the tuned settings are actually honoured by the librbd client that libvirt/QEMU runs; one way to check is sketched below.
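A sketch of how we could verify it (the socket path and client name below are examples, and /var/run/ceph must be writable by the qemu user) is to enable a client admin socket and query the running librbd client:

    # ceph.conf on the hypervisor, [client] section:
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

    # then query the socket created by the running QEMU process
    # (the filename below is an example):
    ceph --admin-daemon /var/run/ceph/ceph-client.cinder.12345.asok \
        config show | grep rbd_cache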

Which options do we have to increase IOPS while the writeback cache is enabled?

All the best,
Florian

