[performance] rbd kernel module versus qemu librbd

Hi, 
I have a question about IOPS performance on a physical machine versus a virtual machine.
Here is my test setup:
1. ssd pool (9 OSD servers with 2 OSDs per server, 10Gb networks for both public & cluster traffic)
2. volume1: a 100G volume created with rbd from the ssd pool and mapped to the physical machine (kernel rbd module)
3. volume2: a 100G volume created with cinder from the same ssd pool and attached to a guest (qemu librbd)
4. rbd cache disabled
5. the same fio test run on both volumes (job options below, followed by a rough sketch of the setup and run commands):
[global]
rw=randwrite
bs=4k
ioengine=libaio
iodepth=64
direct=1
size=64g
runtime=300s
group_reporting=1
thread=1
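
For reference, the two volumes were set up roughly as follows; the pool, volume, and device names are placeholders, and the exact cinder/nova syntax may differ by release:

# volume1: created in the ssd pool and mapped on the physical machine
# via the kernel rbd module (shows up as e.g. /dev/rbd0)
rbd create ssd/volume1 --size 102400
rbd map ssd/volume1

# volume2: created through cinder from the same ssd pool and attached
# to the guest (shows up inside the guest as e.g. /dev/vdb)
cinder create --volume-type ssd --display-name volume2 100
nova volume-attach <instance-id> <volume2-id>

# rbd cache disabled on the client side in ceph.conf
[client]
rbd cache = false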

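Only the [global] section is shown above; each run used those options plus a small job section naming the target device, roughly like this (the device paths are just what they happen to be on my hosts):

# physical machine (kernel rbd)
[volume1]
filename=/dev/rbd0

# guest (librbd via the attached virtio disk)
[volume2]
filename=/dev/vdb
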
volume1 got about 24k IOPS and volume2 got about 14k IOPS.

As you can see, the performance of volume2 is clearly worse than volume1. Is this normal behavior for a guest?
If not, what might the problem be?

Thanks!

hzwulibin@xxxxxxxxx
