Hi folks,
I deployed a Ceph cluster with 10Gb network devices, but the maximum bandwidth I see is only about 100 MB/sec.
Do I need to enable or set up anything for 10Gb support?
My rados bench results:
Total time run:         101.265252
Total writes made:      236
Write size:             40485760
Bandwidth (MB/sec):     89.982
Stddev Bandwidth:       376.238
Max bandwidth (MB/sec): 3822.41
Min bandwidth (MB/sec): 0
Average Latency:        33.9225
Stddev Latency:         12.8661
Max latency:            43.6013
Min latency:            1.03948
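I don't have the exact command line handy; it was something like this (the pool name and the -t value here are placeholders, while -b matches the 40485760-byte write size reported above):

    rados bench -p .rgw.buckets 100 write -b 40485760 -t 16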
I checked the network bandwidth between the nodes with iperf.
[Iperf]
From BM to RadosGW:
local 192.168.2.51 port 5001 connected with 192.168.2.40 port 39421
0.0-10.0 sec  10.1 GBytes  8.69 Gbits/sec
From RadosGW to the Rados nodes:
[ 3] local 192.168.2.51 port 52256 connected with 192.168.2.61 port 5001
[ 3] 0.0-10.0 sec  10.7 GBytes  9.19 Gbits/sec
[ 3] local 192.168.2.51 port 52256 connected with 192.168.2.62 port 5001
[ 3] 0.0-10.0 sec  9.2 GBytes   8.1 Gbits/sec
[ 3] local 192.168.2.51 port 51196 connected with 192.168.2.63 port 5001
[ 3] 0.0-10.0 sec  10.7 GBytes  9.21 Gbits/sec
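These were plain iperf TCP runs with default settings, roughly:

    # on each receiving node
    iperf -s
    # on the sender (10-second TCP test, default port 5001)
    iperf -c 192.168.2.61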
All OSDs are listening on 192.168.2.x.
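For completeness, the only network-related part of my ceph.conf I'm aware of is along these lines (a sketch; the subnet is taken from the addresses above, and I'm assuming no separate cluster network):

    [global]
        public network = 192.168.2.0/24
        # no separate "cluster network" line, so replication also uses 192.168.2.0/24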
My OSD dump:
2013-09-12 07:43:42.556501 7f026a66b780 -1 asok(0x1c9d510) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph': (98) Address already in use
epoch 237
fsid 6e05675c-f545-4d88-9784-ea56ceda750e
created 2013-09-06 00:16:50.324070
modified 2013-09-09 19:56:55.395161
flags
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 13 '.rgw.gc' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 213 owner 18446744073709551615
pool 14 '.rgw.control' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 214 owner 18446744073709551615
pool 15 '.users.uid' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 216 owner 18446744073709551615
pool 16 '.users' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 218 owner 18446744073709551615
pool 17 '.rgw' rep size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 400 pgp_num 400 last_change 231 owner 0
pool 18 '.rgw.buckets' rep size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 400 pgp_num 400 last_change 235 owner 0
pool 19 '.users.swift' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 236 owner 18446744073709551615
max_osd 30
osd.0 up in weight 1 up_from 222 up_thru 233 down_at 221 last_clean_interval [202,219) 192.168.2.61:6809/9155 192.168.2.61:6810/9155 192.168.2.61:6811/9155 exists,up e230eb86-8ed4-4ce6-90ef-60197cd4a6ad
..
osd.10 up in weight 1 up_from 223 up_thru 233 down_at 222 last_clean_interval [70,219) 192.168.2.62:6827/21018 192.168.2.62:6828/21018 192.168.2.62:6829/21018 exists,up 3bfc59f4-e11c-4acf-941d-b8fd0a789ce3
..
osd.20 up in weight 1 up_from 221 up_thru 233 down_at 220 last_clean_interval [118,219) 192.168.2.63:6801/19185 192.168.2.63:6807/19185 192.168.2.63:6810/19185 exists,up 62833dc4-79b4-4905-8432-a26a8d1aaf06
..
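In case the disks rather than the network are the limit, I can also benchmark a single OSD's backing store directly, e.g. (osd.0 is just an example ID from the dump above; by default this writes 1 GB and reports MB/s):

    ceph tell osd.0 bench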
+Hugo Kuo+
(+886) 935004793