Thanks, I'll disable it later. Perhaps that's related to another problem I had here.
According to my network topology plan, I expect the max bandwidth could reach 10Gb. I double-checked every connection in this cluster with iperf.
[Iperf]
From BM to RadosGW
local 192.168.2.51 port 5001 connected with 192.168.2.40 port 39421
 0.0-10.0 sec  10.1 GBytes  8.69 Gbits/sec
From RadosGW to Rados nodes
[ 3] local 192.168.2.51 port 52256 connected with 192.168.2.61 port 5001
[ 3]  0.0-10.0 sec  10.7 GBytes  9.19 Gbits/sec
[ 3] local 192.168.2.51 port 52256 connected with 192.168.2.62 port 5001
[ 3]  0.0-10.0 sec  9.2 GBytes  8.1 Gbits/sec
[ 3] local 192.168.2.51 port 51196 connected with 192.168.2.63 port 5001
[ 3]  0.0-10.0 sec  10.7 GBytes  9.21 Gbits/sec
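For reference, each link was checked with a plain iperf TCP server/client pair along these lines (the exact flags are an assumption; iperf's default 10-second test matches the intervals above):

    # on the receiving node, e.g. 192.168.2.61:
    iperf -s
    # on the sending node:
    iperf -c 192.168.2.61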
I can only push data at about 100 MB/sec. Does Ceph require any configuration to enable 10Gb support?
Total time run:         101.265252
Total writes made:      236
Write size:             40485760
Bandwidth (MB/sec):     89.982
Stddev Bandwidth:       376.238
Max bandwidth (MB/sec): 3822.41
Min bandwidth (MB/sec): 0
Average Latency:        33.9225
Stddev Latency:         12.8661
Max latency:            43.6013
Min latency:            1.03948
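The output above is from rados bench; an invocation along these lines produces it (the pool name and concurrency here are placeholders, while -b matches the reported write size and 100 matches the run time):

    rados bench -p <pool> 100 write -b 40485760 -t 32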
+Hugo Kuo+
(+886) 935004793
2013/9/12 Yehuda Sadeh <yehuda@xxxxxxxxxxx>
On Wed, Sep 11, 2013 at 10:37 PM, Kuo Hugo <tonytkdk@xxxxxxxxx> wrote:
> Yes. I restarted it via /etc/init.d/radosgw a few times before. :D
>
> BTW, I checked several things here to rule out any permission issue.
>
> root@p01:/var/run# /etc/init.d/radosgw start
> Starting client.radosgw.gateway...
> root@p01:/var/run# ps aux | grep rados
> root 25823 1.8 0.0 16436340 7096 ? Ssl 22:30 0:00
> /usr/bin/radosgw -n client.radosgw.gateway
> root 26055 0.0 0.0 9384 920 pts/0 S+ 22:30 0:00 grep
> --color=auto rados
> root@p01:/var/run# ls ceph/
> root@p01:/var/run# ls -ald ceph/
> drwxrwxrwx 2 root root 40 Sep 9 07:47 ceph/
>
>
> Apparently, radosgw is running as root. To be safe, I changed the mode of
> /var/run/ceph to be fully open.
> Still no luck with the radosgw admin socket.
>
> One more thing about performance: it drops to 20~30% of its previous level
> once the system runs out of memory for caching inodes. A quick reference
> number comes from a 1KB object upload test:
> throughput fell from 1200 reqs/sec to 300 reqs/sec. That's a potential issue
> I observed. Any way to work around it would be great.
>
>
You can try disabling the rgw cache. We just found out an issue with it, so it may be interesting to see how the system behaves without it:
rgw cache enabled = false
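For example, in ceph.conf under the gateway's section (the section name matching the -n argument shown in the ps output above), followed by a radosgw restart:

    [client.radosgw.gateway]
        rgw cache enabled = false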
Yehuda
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com