randwrite IOPS of RBD volume in KVM decreases after several hours, with QEMU threads and CPU usage on host increasing

Hi experts,

When I test the I/O performance of an RBD volume in a pure-SSD pool with fio
inside a KVM VM, the IOPS drops from 15k to 5k after several hours, while the
number of QEMU threads on the host grows from about 200 to about 700 and the
CPU usage of the QEMU process grows from about 600% to 1400% (measured as
shown below).
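I collected the thread count and CPU usage on the host roughly like this
(pidof qemu-kvm assumes a single guest and the qemu-kvm process name; with
several guests the PID would have to be picked by VM name instead):

# number of threads in the qemu process
ps -o nlwp= -p $(pidof qemu-kvm)
# per-thread CPU usage of the same process
top -H -p $(pidof qemu-kvm)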

My fio parameters are as follows (a complete job file is sketched after the list):
rw=randwrite
direct=1
numjobs=64
ioengine=sync
bsrange=4k-4k
runtime=180
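
For reference, the full job file reconstructed from the parameters above looks
roughly like this; the job name and the target device /dev/vdb are placeholders
for the RBD-backed virtio disk inside my guest:

[global]
rw=randwrite
direct=1
numjobs=64
ioengine=sync
bsrange=4k-4k
runtime=180

[randwrite-test]
# placeholder target; the real test ran against the RBD-backed disk in the VM
filename=/dev/vdb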

The versions of the relevant packages are as follows:
ceph: 0.94.3
qemu-kvm: 2.1.2
host kernel: 3.10

What might be causing this?

Any help would be appreciated.

Best Regards,
Jackie


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


