How to improve performance of RBD & CloudStack

Hello, I am writing from China. I hope my English below is understandable.

We are doing a basic test with Ceph and CloudStack.
Experimental environment:
1.   Four ceph-osd daemons running on two nodes (CentOS 6.2). Each node has three 1GB physical disks (we build the OSDs on /dev/sdb and /dev/sdc),
      so we get 4GB of RBD storage space to use (see the sketch after this list for a quick way to confirm this layout).
      Node specs: CPU: Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
                            RAM: 48GB
                            NIC speed: 1000Mb/s
2.   One ceph-mon running on another node (Ubuntu 13.04).
3.   One KVM host node (the Ceph client) on which several guest VMs run (we use one of them for the test).
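
For reference, this is roughly how the layout and health can be confirmed from the monitor node; the pool name "rbd" is only the default and may be different in a CloudStack setup:

    ceph -s          # overall health and number of OSDs up/in
    ceph osd tree    # confirm the four OSDs are spread across the two nodes
    rados df         # per-pool usage, to see how much of the ~4GB is consumed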

We test on a guest VM that has two RBD-backed disks (a 10GB root disk and a 20GB data disk).
We log in to the VM and test the disks' read and write performance as follows.

Write speed: wget http://remote server ip/2GB.file gives an average write speed of about 6MB/s (far below what we expected).
(We must be doing something wrong here, and we would appreciate any help. We think the problem comes from the KVM emulator, but we are not sure. Can you give us any advice on improving the VM's disk write performance?)
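
If the KVM layer really is the bottleneck, one thing we have seen recommended is making sure the guest disk uses the virtio bus and RBD writeback caching. A rough sketch of the relevant libvirt disk definition follows; the image name "rbd/vm-datadisk", the target device, and the monitor/cephx auth lines are only placeholders or omitted here:

    # virsh edit <vm-name>, then the RBD disk stanza should look roughly like:
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vm-datadisk'/>
      <target dev='vdb' bus='virtio'/>
    </disk>

    # and on the KVM host, enabling the client-side RBD cache in ceph.conf:
    # [client]
    #     rbd cache = true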

Read speed: wget http://local server ip/2GB.file -O /dev/null gives an average read speed of 39.8MB/s (that seems fine).
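
To separate the network from the disk itself, we could also run a simple direct-I/O test inside the guest; a minimal sketch (file name and size are arbitrary):

    dd if=/dev/zero of=/root/ddtest bs=4M count=256 oflag=direct   # ~1GB write, bypassing the page cache
    dd if=/root/ddtest of=/dev/null bs=4M iflag=direct             # read the same file back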

PS: on the host server we ran the RBD read/write tests as well, and they perform well: about 80MB/s for both read and write.
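
For comparison, the raw cluster throughput (independent of any VM) can be measured with rados bench; a minimal sketch, again assuming the default pool name "rbd":

    rados bench -p rbd 30 write --no-cleanup   # 30-second write benchmark, keep the objects
    rados bench -p rbd 30 seq                  # sequential read of the objects written above
    rados -p rbd cleanup                       # remove the benchmark objects afterwards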


Best regards.




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
