Hello All,
I have set up a Ceph cluster based on the 0.94.6 release on 2 servers, each with an 80 GB Intel S3510, 2x 3 TB 7.2k SATA disks, 16 CPUs and 24 GB RAM,
connected to a 10G switch, with a replica count of 2 [I will add 3 more servers to the cluster] and 3 separate monitor nodes which are VMs.
rbd_cache is enabled in the configuration, the filesystem is XFS, and the disks sit behind an LSI 92465-4i RAID card with 512 MB cache [the SSD is in writeback mode with BBU].
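For reference, the client-side cache settings are along these lines (a rough sketch; the values are what I believe to be the Hammer defaults, apart from enabling the cache -- nothing is tuned):

[client]
    rbd cache = true
    # 32 MB cache / 24 MB max dirty -- believed Hammer defaults, not changed by me
    rbd cache size = 33554432
    rbd cache max dirty = 25165824
    rbd cache writethrough until flush = true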
Before installing Ceph, I checked the maximum throughput of the Intel 3500 80 GB SSD using a block size of 4M [I read somewhere that Ceph uses 4 MB objects], and it gave about 220 MB/s {dd if=/dev/zero of=/dev/sdb bs=4M count=1000 oflag=direct}.
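Since the FileStore journal writes with O_DIRECT and O_DSYNC, a plain direct dd probably overstates what the SSD can sustain as a journal; a closer (and equally destructive) test would, I assume, be something like:

# /dev/sdb must be the raw, unused journal SSD -- this overwrites it
dd if=/dev/zero of=/dev/sdb bs=4M count=1000 oflag=direct,dsync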
Observation:
Now the cluster is up and running, and from the VM I am writing a 4 GB file to its volume using dd if=/dev/zero of=/dev/sdb bs=4M count=1000 oflag=direct. It takes around 39 seconds (~107 MB/s).
During this time the SSD journal showed disk writes of about 104 MB/s on both Ceph servers (dstat sdb), and the compute node showed a network transfer rate of ~110 MB/s on its 10G storage interface (dstat -nN eth2).
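To take the VM and dd out of the picture, a raw cluster benchmark along these lines might help (assuming the pool backing the volume is called "rbd"; the bench removes its objects when it finishes):

# 4 MB objects, 16 concurrent ops, 60 seconds, run from one of the Ceph servers
rados bench -p rbd 60 write -b 4194304 -t 16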
My questions are:
- Is this the best throughput Ceph can offer, or can anything in my environment be optimised to get more performance? [iperf shows a maximum throughput of 9.8 Gbit/s]
- I guess the network/SSD are under-utilised and can handle more writes; how can this be improved so that more data is sent over the network to the SSD? (See the fio sketch after this list.)
- The rbd kernel module wasn't loaded on the compute node; I loaded it manually with "modprobe" and later destroyed/re-created the VMs, but this did not give any performance boost. So are librbd and kernel RBD equally fast?
- A Samsung 840 EVO 512 GB shows a throughput of 500 MB/s for 4M writes [dd if=/dev/zero of=/dev/sdb bs=4M count=1000 oflag=direct], and for 4 KB writes it was about as fast as the Intel S3500 80 GB. Would swapping the Intel S3500 for the Samsung 840 EVO make any performance difference here, given that the 840 EVO is faster for 4M writes? Can Ceph make use of that extra speed?
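Related to the under-utilisation question above: a single dd with oflag=direct only keeps one 4M write in flight, so latency rather than bandwidth may be the limit. A sketch of a parallel test from inside the VM, assuming fio is installed there (it overwrites /dev/sdb, so only on a scratch volume):

# 16 outstanding 4 MB writes against the rbd-backed disk
fio --name=4mwrite --filename=/dev/sdb --rw=write --bs=4M --direct=1 \
    --ioengine=libaio --iodepth=16 --size=4G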
Can somebody help me understand this better?
Regards,
Kevin