RBD image QoS (rbd_qos_write_bps_limit / rbd_qos_bps_limit) and mkfs performance

Hi,

I noticed a dramatic performance drop with mkfs.ext4 (as well as mkfs.xfs) as soon as I set almost any value for rbd_qos_write_bps_limit (or rbd_qos_bps_limit).
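In case someone wants to reproduce this: a per-image limit like the ones below can be set via the librbd config overrides, e.g. (pool/image names are placeholders, the value shown is the 5 GB/s case):

    rbd config image set <pool>/<image> rbd_qos_write_bps_limit 5368709120
    rbd config image list <pool>/<image> | grep qos    # verify the override
    rbd config image remove <pool>/<image> rbd_qos_write_bps_limit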

Baseline: 4TB rbd volume  rbd_qos_write_bps_limit = 0
mkfs.ext4:
real    0m6.688s
user    0m0.000s
sys     0m0.006s

50GB/s: 4TB rbd volume  rbd_qos_write_bps_limit = 53687091200
mkfs.ext4:
real    1m22.217s
user    0m0.009s
sys     0m0.000s

5GB/s: 4TB rbd volume  rbd_qos_write_bps_limit = 5368709120
mkfs.ext4:
real    13m39.770s
user    0m0.008s
sys     0m0.034s

500MB/s: 4TB rbd volume  rbd_qos_write_bps_limit = 524288000
mkfs.ext4:
The test is still running... I can provide the result if needed.

The tests run in a client VM (Ubuntu 22.04) using Qemu/libvirt.
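The timings above are plain time(1) runs against the attached virtio disk inside the guest, along the lines of (the device path is just an example):

    time mkfs.ext4 /dev/vdb
    time mkfs.xfs -f /dev/vdb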

Applying the same values via Qemu/libvirt's own block I/O tuning (QoS) instead does not affect mkfs performance:
https://libvirt.org/formatdomain.html#block-i-o-tuning
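I.e. the equivalent limit there goes into the <iotune> element of the corresponding <disk> in the domain XML, something like (value matches the 500 MB/s case):

    <iotune>
      <write_bytes_sec>524288000</write_bytes_sec>
    </iotune>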

Ceph Version: 16.2.11
Qemu: 6.2.0
Libvirt: 8.0.0
Kernel (hypervisor host): 5.19.0-35-generic 
librbd1 (hypervisor host): 17.2.5

Could anyone please confirm this and explain what's going on?

All the best,
Florian


