I have a problem with I/O performance on an OpenStack block device.

*Environment:*

*OpenStack version: Ussuri*
- OS: CentOS 8
- Kernel: 4.18.0-240.15.1.el8_3.x86_64
- KVM: qemu-kvm-5.1.0-20.el8

*Ceph version: Octopus*
- OS: CentOS 8
- Kernel: 4.18.0-240.15.1.el8_3.x86_64

In the Ceph cluster we have two device classes (all OSDs are BlueStore):
- HDD (only for Cinder volumes)
- SSD (images and Cinder volumes)

*Hardware:*
- Ceph client network: 2x10Gbps (bonded), MTU 9000
- Ceph replication network: 2x10Gbps (bonded), MTU 9000

*VM:*
- Swap disabled (swapoff)
- No LVM

*Issue*
When I create a VM on OpenStack with a Cinder volume from the HDD class, write performance is really poor: 60-85 MB/s. Testing with ioping inside the VM also shows high latency.

*Diagnostics*
1. I checked the path between the compute host (OpenStack) and Ceph directly: I created an RBD image in the HDD class and mounted it on the compute host. There the performance is 300-400 MB/s, so the problem seems to be in the hypervisor. However, a VM using a Cinder volume from the SSD class performs the same as an RBD (SSD) mounted on the compute host.
2. I have already configured disk_cachemodes="network=writeback" (with the RBD client cache enabled) and also tested with disk_cachemodes="none"; neither makes any difference.
3. iperf3 from the compute host to a random Ceph host still shows ~20 Gbps of traffic.
4. The compute host and the Ceph hosts are connected to the same layer-2 switch.

Rough sketches of the commands and settings I used are at the end of this mail.

Where else can I look to troubleshoot this? Please help me with this case. Thank you.
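
For reference, this is roughly how I measure the numbers inside the guest. A minimal sketch, assuming the Cinder volume is attached as /dev/vdb (an example name, adjust to your layout); the fio run writes straight to the block device, so it is destructive:

    # Sequential write throughput with direct I/O (bypasses the guest page cache).
    # WARNING: writes raw data to /dev/vdb and destroys whatever is on it.
    fio --name=seqwrite --filename=/dev/vdb --rw=write --bs=4M \
        --ioengine=libaio --iodepth=16 --direct=1 --size=2G --group_reporting

    # Per-request latency against the same device
    ioping -c 20 /dev/vdb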
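
The compute-host test from diagnostic 1 looked roughly like the following; the pool name volumes-hdd is only an example. Note that a mapped RBD goes through the kernel client (krbd) while the VM goes through librbd, so the rbd bench line gives a closer librbd comparison:

    # Create and map a throwaway image in the HDD-class pool (example pool name).
    rbd create volumes-hdd/perftest --size 10240       # 10 GiB
    rbd map volumes-hdd/perftest                       # appears as e.g. /dev/rbd0

    # Same fio profile as inside the VM, now against the mapped device
    fio --name=seqwrite --filename=/dev/rbd0 --rw=write --bs=4M \
        --ioengine=libaio --iodepth=16 --direct=1 --size=2G --group_reporting

    # librbd-level write benchmark on the same image (no kernel client involved)
    rbd bench --io-type write --io-size 4M --io-total 2G volumes-hdd/perftest

    # Clean up
    rbd unmap /dev/rbd0
    rbd rm volumes-hdd/perftest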
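
For diagnostic 2, this is roughly how I check that the cache settings actually reach QEMU; the instance name instance-00000042 is just a placeholder, and crudini may need to be installed (plain grep on nova.conf works too):

    # nova.conf on the compute node -- the cache mode handed to QEMU for rbd-backed disks
    crudini --get /etc/nova/nova.conf libvirt disk_cachemodes   # expect: network=writeback

    # librbd cache options picked up by the client from the [client] section
    grep -A6 '^\[client\]' /etc/ceph/ceph.conf

    # Confirm the cache mode in the running domain's disk definition
    virsh dumpxml instance-00000042 | grep -B2 -A4 'protocol=.rbd.'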
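
And the network check from diagnostic 3, with an MTU sanity test added since the links run at MTU 9000; ceph-node1 is a placeholder hostname:

    # On one of the Ceph hosts:
    iperf3 -s

    # On the compute host: several parallel streams so the 2x10G bond can be exercised
    iperf3 -c ceph-node1 -P 4 -t 30

    # Jumbo-frame path check: 8972 = 9000 bytes MTU minus 28 bytes of IP/ICMP headers
    ping -M do -s 8972 -c 3 ceph-node1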