Re: rbd cache limiting IOPS

I was able to verify the settings that are actually in use by using a ceph.conf like:

[client.nova]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/client.log
debug rbd = 20
debug librbd = 20
rbd cache = true
rbd cache size = 268435456
rbd cache max dirty = 201326592
rbd cache target dirty = 134217728

and then querying the admin socket:

ceph --admin-daemon /var/run/ceph/ceph-client.nova.17276.94854801343568.asok config get rbd_cache_size
{
    "rbd_cache_size": "268435456"
}
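
The full set of cache-related values can be checked the same way; for example (using the socket path from above, adjust to the actual asok file of your qemu process):

ceph --admin-daemon /var/run/ceph/ceph-client.nova.17276.94854801343568.asok config show | grep rbd_cache

should print rbd_cache, rbd_cache_size, rbd_cache_max_dirty and rbd_cache_target_dirty with the values set in ceph.conf.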


So the settings are recognized and used by qemu. But any cache size larger than the default (32 MB) leads to strange IOPS results. With 32 MB the IOPS are very constant at ~20,000 - 23,000, but with a bigger cache size (we tested 64 MB up to 256 MB) the IOPS become very inconsistent (anywhere from 0 up to 23,000 IOPS).
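
For reference, a fio job roughly matching the tests described here and below (4 jobs, QD=64) could look like the following; block size, runtime and the target device are assumptions and need to be adjusted to the actual setup:

fio --name=rbd-cache-test --filename=/dev/vdb --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --numjobs=4 --iodepth=64 --runtime=60 --time_based --group_reporting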

Setting "rbd cache max dirty" to 0 changes the behaviour to write-through as far as I understood. I expected the latency to increase to at least 0.6 ms what was the case but I also expected the IOPS to increase to up to 60.000 which was not the case. IOPS was constant at ~ 14.000IOPS (4 jobs, QD=64).



On 3/7/19 11:41 AM, Florian Engelmann wrote:
Hi,

we are running an OpenStack environment with Ceph block storage. There are six nodes in the current Ceph cluster (12.2.10) with NVMe SSDs and a P4800X Optane for RocksDB and WAL. The decision was made to use the rbd writeback cache with KVM/QEMU. The write latency is incredibly good (~85 µs) and the read latency is still good (~0.6 ms). But we are limited to ~23,000 IOPS in a KVM machine. So we ran the same fio benchmark with the rbd cache disabled and got 65,000 IOPS, but of course the write latency (QD1) increased to ~0.6 ms.
We tried to tune:

rbd cache size -> 256MB
rbd cache max dirty -> 192MB
rbd cache target dirty -> 128MB

but we are still stuck at ~23,000 IOPS with the writeback cache enabled.

Right now we are not sure if the tuned settings have been honoured by libvirt.

Which options do we have to increase IOPS while writeback cache is used?

All the best,
Florian



--

EveryWare AG
Florian Engelmann
Senior UNIX Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: florian.engelmann@xxxxxxxxxxxx
web: http://www.everyware.ch


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
