Sorry for the mistake in my email: the four disks are all the same size, 1TB each, not 'GB'. I wrote it wrong in the email content. Sorry.
At 2013-06-28 22:52:17,"Gregory Farnum" <greg@xxxxxxxxxxx> wrote:
It sounds like you just built a 4GB (or 6GB?) RADOS cluster and then tried to put 4GB of data into it. That won't work; the underlying local filesystems probably started having trouble with allocation issues as soon as you got to 2GB free. -Greg
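A quick way to confirm the cluster's real capacity and how full it is would be the standard status commands (just a sketch, run on any node with admin access):

    # Overall cluster health and usage summary:
    ceph -s
    # Global and per-pool space usage (total, used, available):
    ceph df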
On Friday, June 28, 2013, 华仔 wrote: Hello, I am from China. I hope you can read my poor English below.
We are doing a basic test with ceph and cloudstack.
Experimental environment:
1. Four ceph-osds running on two nodes (CentOS 6.2); each node has three 1GB physical disks (we build OSDs on /dev/sdb and /dev/sdc),
so we get 4GB of rbd storage space to use.
Node specs: CPU --- Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
RAM --- 48GB
NIC speed --- 1000Mb/s
2. One ceph-monitor running on another node (Ubuntu 13.04).
3. One KVM host node (ceph-client) on which several guest VMs run (we use one of them to do the test).
We test on the guest VM, which has two rbd-based disks (10GB root disk and 20GB data disk).
We log on to the VM and test the disks' write & read performance as below.
Write speed: wget http://remote server ip/2GB.file gives an average write speed of 6MB/s (far below what we expected).
(We must be doing something wrong here, and we would greatly appreciate any help. We suspect the problem comes from the KVM emulator, but we are not sure. Can you give us some advice to improve our VM's disk write performance?)
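For reference, one way to see how the rbd disks are attached to the guest is to dump the libvirt definition (a rough sketch, assuming a libvirt-managed guest; the name "testvm" is only a placeholder):

    # Show the guest's disk definitions, including cache mode and bus type;
    # cache='writeback' and bus='virtio' are the usual starting points for rbd disks:
    virsh dumpxml testvm | grep -A 4 "<disk"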
Read speed: wget http://local server ip/2GB.file -O /dev/null gives an average read speed of 39.8MB/s (that seems great).
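Since wget measures the network download and the disk write together, a local-only write test inside the VM may be more telling. A minimal sketch (the mount point /mnt/datadisk and the file name are placeholders for wherever the 20GB data disk is mounted):

    # Write 2GB of zeros straight to the data disk, bypassing the page cache,
    # so the result reflects rbd write throughput rather than download speed:
    dd if=/dev/zero of=/mnt/datadisk/ddtest.bin bs=4M count=512 oflag=direct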
PS: On the host server we do the rbd read/write testing too, and it works perfectly: 80MB/s (read/write).
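For the host-side test, a run along these lines benchmarks the cluster directly (only a sketch; the pool name "rbd" and the 60-second duration are example values):

    # Write benchmark against the 'rbd' pool, keeping the objects for the read pass:
    rados bench -p rbd 60 write --no-cleanup
    # Sequential read benchmark over the objects written above:
    rados bench -p rbd 60 seq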
Best regards.
--
Software Engineer #42 @ http://inktank.com | http://ceph.com