>>(I think I saw a PR about this on the performance meeting pad some months ago)

https://github.com/ceph/ceph/pull/25713

----- Original Message -----
From: "aderumier" <aderumier@xxxxxxxxx>
To: "Engelmann Florian" <florian.engelmann@xxxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, March 8, 2019 15:03:23
Subject: Re: rbd cache limiting IOPS

>>Which options do we have to increase IOPS while writeback cache is used?

If I remember correctly, there is some kind of global lock/mutex in the rbd cache, and I think there is work under way to improve it.
(I think I saw a PR about this on the performance meeting pad some months ago)

----- Original Message -----
From: "Engelmann Florian" <florian.engelmann@xxxxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, March 7, 2019 11:41:41
Subject: rbd cache limiting IOPS

Hi,

we are running an OpenStack environment with Ceph block storage. There are six nodes in the current Ceph cluster (12.2.10) with NVMe SSDs and a P4800X Optane for RocksDB and the WAL.

The decision was made to use the rbd writeback cache with KVM/QEMU. The write latency is incredibly good (~85 µs) and the read latency is still good (~0.6 ms). But we are limited to ~23,000 IOPS in a KVM machine. So we ran the same fio benchmark after disabling the rbd cache and got 65,000 IOPS, but of course the QD1 write latency increased to ~0.6 ms.

We tried to tune:

rbd cache size -> 256MB
rbd cache max dirty -> 192MB
rbd cache target dirty -> 128MB

but we are still locked at ~23,000 IOPS with the writeback cache enabled. Right now we are not sure whether the tuned settings have been honoured by libvirt.

Which options do we have to increase IOPS while the writeback cache is used?

All the best,
Florian
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
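
To check whether libvirt/QEMU actually honoured the tuned values, librbd can expose an admin socket that reports the configuration the running client ended up with. Below is a minimal sketch of the relevant [client] section in ceph.conf on the compute node, assuming the guests attach their images through librbd; the byte values are the 256/192/128 MB figures from the thread, and the socket path template is an assumption to adjust (the directory must be writable by the qemu user):

    [client]
    rbd cache = true
    rbd cache size = 268435456          # 256 MB
    rbd cache max dirty = 201326592     # 192 MB
    rbd cache target dirty = 134217728  # 128 MB
    # one socket per librbd client instance, so the effective
    # configuration can be queried while the guest is running
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

After restarting the guest, the values librbd really uses can be read back from the socket (the exact file name varies with pid and client instance, so check the directory first):

    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.94027672614912.asok \
        config show | grep rbd_cache

Note that QEMU derives the rbd cache behaviour from the drive's cache mode, so a disk attached with cache=none disables the rbd cache regardless of ceph.conf; the libvirt driver element needs cache='writeback' for these settings to take effect.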
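
The benchmark itself is not shown in the thread; here is a sketch of a pair of fio jobs that would reproduce the two kinds of numbers quoted (QD1 write latency and the IOPS ceiling), assuming a dedicated scratch disk /dev/vdb inside the guest. The device name, queue depth, and job count are assumptions, and all data on the device is destroyed:

    [global]
    ioengine=libaio
    direct=1
    bs=4k
    rw=randwrite
    runtime=60
    time_based
    filename=/dev/vdb

    # per-write latency at queue depth 1 (~85 us with the cache,
    # ~0.6 ms without, per the thread)
    [qd1-latency]
    iodepth=1

    # deeper queue to find the IOPS ceiling (~23k vs ~65k)
    [qd32-iops]
    stonewall
    iodepth=32
    numjobs=4

Saved as, say, rbd-cache-test.fio and run with "fio rbd-cache-test.fio", the first job exposes the latency difference between cache on and off, while the second shows where the writeback cache caps throughput.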