Hi,

I noticed an incredibly large performance drop with mkfs.ext4 (as well as mkfs.xfs) when setting (almost) any value for rbd_qos_write_bps_limit (or rbd_qos_bps_limit).

Baseline: 4TB rbd volume, rbd_qos_write_bps_limit = 0
mkfs.ext4:
    real    0m6.688s
    user    0m0.000s
    sys     0m0.006s

50GB/s: 4TB rbd volume, rbd_qos_write_bps_limit = 53687091200
mkfs.ext4:
    real    1m22.217s
    user    0m0.009s
    sys     0m0.000s

5GB/s: 4TB rbd volume, rbd_qos_write_bps_limit = 5368709120
mkfs.ext4:
    real    13m39.770s
    user    0m0.008s
    sys     0m0.034s

500MB/s: 4TB rbd volume, rbd_qos_write_bps_limit = 524288000
mkfs.ext4: test still running... I can provide the result if needed.

The tests are running in a client VM (Ubuntu 22.04) using Qemu/libvirt. Setting the same values via Qemu/libvirt block I/O tuning instead does not affect mkfs performance:
https://libvirt.org/formatdomain.html#block-i-o-tuning

Ceph version: 16.2.11
Qemu: 6.2.0
Libvirt: 8.0.0
Kernel (hypervisor host): 5.19.0-35-generic
librbd1 (hypervisor host): 17.2.5

Could anyone please confirm and explain what's going on?

All the best,
Florian
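
P.S. For completeness, here is roughly how the limit was set and the test timed. The pool/image names (rbd/vol1) and the guest device (/dev/vdb) are placeholders for my setup; a global override via "rbd config global set global ..." behaves the same way:

    # on the hypervisor host: set a per-image write limit (500MB/s here)
    rbd config image set rbd/vol1 rbd_qos_write_bps_limit 524288000

    # inside the guest VM: time the filesystem creation
    time mkfs.ext4 /dev/vdb

And the libvirt-side equivalent for the comparison case (per the formatdomain link above), which did not show the slowdown, was an iotune element on the disk, e.g.:

    <iotune>
      <write_bytes_sec>524288000</write_bytes_sec>
    </iotune>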