High iowait when using Ceph NVME


 



Hi,
Currently I'm testing Ceph v17.2.7 with NVMe. When I map an rbd image directly on a physical compute host, "fio bs=4k iodepth=128 randwrite" gives about 150k IOPS. Inside a VM running on that same compute host, the same fio test gives only ~40k IOPS with about 50% iowait. I know there is a bottleneck somewhere, but I'm not sure where it is.
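
For reference, the host-side test looks roughly like this (the pool/image names and device path are just examples, not the real ones):

    rbd map mypool/myimage
    fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=128 --numjobs=1 --runtime=60 --time_based \
        --filename=/dev/rbd0

Inside the VM I run the same fio command against the guest's virtual disk.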

I have tried enabling iothreads in virsh, but nothing changed. Does anyone have any ideas?
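
In case it helps, this is roughly what I added to the domain XML (the pool/image name and mon host are placeholders, and the cephx auth section is left out):

    <domain>
      ...
      <iothreads>1</iothreads>
      <devices>
        <disk type='network' device='disk'>
          <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/>
          <!-- cephx <auth> element omitted here -->
          <source protocol='rbd' name='mypool/myimage'>
            <host name='mon-host' port='6789'/>
          </source>
          <target dev='vda' bus='virtio'/>
        </disk>
        ...
      </devices>
    </domain>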

Thanks in advance.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


