Re: [RadosGW] Performance for Concurrent Connections

Yes. I've restarted it via /etc/init.d/radosgw a few times before.  :D

btw, I checked a few things here to rule out any permission issue.

root@p01:/var/run# /etc/init.d/radosgw start
Starting client.radosgw.gateway...
root@p01:/var/run# ps aux | grep rados
root     25823  1.8  0.0 16436340 7096 ?       Ssl  22:30   0:00 /usr/bin/radosgw -n client.radosgw.gateway
root     26055  0.0  0.0   9384   920 pts/0    S+   22:30   0:00 grep --color=auto rados
root@p01:/var/run# ls ceph/
root@p01:/var/run# ls -ald ceph/
drwxrwxrwx 2 root root 40 Sep  9 07:47 ceph/


Apparently radosgw is running as root. To be safe, I changed the mode of /var/run/ceph to be fully open (0777).
Still no luck with the radosgw admin socket.
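
Next I'll try setting the admin socket path explicitly in ceph.conf, per your
note that it's the generic 'admin socket' option (the path below is just my
guess at a sensible location), and then restart radosgw:

    [client.radosgw.gateway]
    admin socket = /var/run/ceph/radosgw.asok

and verify it with:

    ceph --admin-daemon /var/run/ceph/radosgw.asok config show | grep rgw_thread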

One more thing about performance: throughput drops to 20~30% of the original rate once the system runs out of memory for the inode cache. A quick reference number comes from a 1KB object upload test:
the rate went from 1200 reqs/sec --> 300 reqs/sec. That's a potential issue I observed; any way to work around it would be great.
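
The only workaround I can think of so far (purely a guess on my side, not
something I've verified) is to tell the kernel to hold on to the inode/dentry
cache longer, e.g.:

    # lower cache pressure so inodes/dentries are reclaimed less aggressively
    # (kernel default is 100; this is just an experiment, not a recommendation)
    sysctl -w vm.vfs_cache_pressure=50

If there's a better rgw-side knob for this, please let me know.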



+Hugo Kuo+
(+886) 935004793


2013/9/12 Yehuda Sadeh <yehuda@xxxxxxxxxxx>
On Wed, Sep 11, 2013 at 10:25 PM, Kuo Hugo <tonytkdk@xxxxxxxxx> wrote:
>
> thanks
>
> 1) I'm sure there's no .asok socket file for radosgw on my RadosGW host.
> 2) rgw_thread_pool_size was set to 200 in my ceph.conf, so radosgw should be
> using that value now.
> 3) If so, tweaking the rgw_thread_pool_size value from 100->200 did not

Did you restart your gateway afterwards?

> help improve the performance of concurrent connections.
> 4) I'm considering doing some research on Apache's configuration.
> 5) Have you run a similar benchmark with high-concurrency connections
> before?
>
> Cheers
>
>
> +Hugo Kuo+
> (+886) 935004793
>
>
> 2013/9/12 Yehuda Sadeh <yehuda@xxxxxxxxxxx>
>>
>> On Wed, Sep 11, 2013 at 9:57 PM, Kuo Hugo <tonytkdk@xxxxxxxxx> wrote:
>> > Hmm.... Interesting now.
>> >
>> > I have no admin socket open anywhere.
>>
>> Maybe your radosgw process doesn't have permissions to write into
>> /var/run/ceph?
>>
>> >
>> > root@p01:/var/run/ceph# ls /var/run/ceph -al
>> > total 0
>> > drwxr-xr-x  2 root root  40 Sep  9 07:47 .
>> > drwxr-xr-x 17 root root 600 Sep 11 21:23 ..
>> > root@p01:/var/run/ceph# lsof | grep radosgw.asok
>> > root@p01:/var/run/ceph#
>> >
>> > I review the on-line doc for radosgw :
>> > http://ceph.com/docs/next/radosgw/config-ref/
>> > There's no configuration for rgw admin socket tho.
>>
>> It's a generic ceph configurable; the option is called 'admin socket'.
>>
>> >
>> > root@s01:~# ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config
>> > show |
>> > grep rgw_thread
>> >   "rgw_thread_pool_size": "100",
>> >
>> >
>> > I found that the OSD config output includes rgw_thread_pool_size. Is this
>> > what you mentioned?
>> > Why does that value show up on the OSD?
>> > Where do the OSDs get the value of rgw_thread_pool_size from?
>> >
>>
>> The ceph global config holds that variable; the osd just picks up all the
>> defaults but has no use for it.
>>
>>
>> Yehuda
>>
>> >
>> > +Hugo Kuo+
>> > (+886) 935004793
>> >
>> >
>> > 2013/9/12 Yehuda Sadeh <yehuda@xxxxxxxxxxx>
>> >>
>> >> On Wed, Sep 11, 2013 at 9:34 PM, Kuo Hugo <tonytkdk@xxxxxxxxx> wrote:
>> >> > Hi Yehuda,
>> >> >
>> >> > Here's my ceph.conf
>> >> >
>> >> > root@p01:/tmp# cat /etc/ceph/ceph.conf
>> >> > [global]
>> >> > fsid = 6e05675c-f545-4d88-9784-ea56ceda750e
>> >> > mon_initial_members = s01, s02, s03
>> >> > mon_host = 192.168.2.61,192.168.2.62,192.168.2.63
>> >> > auth_supported = cephx
>> >> > osd_journal_size = 1024
>> >> > filestore_xattr_use_omap = true
>> >> >
>> >> > [client.radosgw.gateway]
>> >> > host = p01
>> >> > keyring = /etc/ceph/keyring.radosgw.gateway
>> >> > rgw_socket_path = /tmp/radosgw.sock
>> >> > log_file = /var/log/ceph/radosgw.log
>> >> > rgw_thread_pool_size = 200
>> >> >
>> >> > Based on my conf, /tmp/radosgw.sock is created when starting the radosgw
>> >> > service.
>> >> > So I tried to show the config with:
>> >> >
>> >> > root@p01:/tmp# ceph --admin-daemon /tmp/radosgw.sock config show
>> >> > read only got 0 bytes of 4 expected for response length; invalid
>> >> > command?
>> >> >
>> >> > Is this a bug or an operational mistake?
>> >>
>> >> You're connecting to the wrong socket. You need to connect to the
>> >> admin socket, not to the socket that is used for web server <-> gateway
>> >> communication. That socket by default should reside in /var/run/ceph.
>> >>
>> >>
>> >> >
>> >> > root@p01:/tmp# radosgw-admin -v
>> >> > ceph version 0.61.8 (a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b)
>> >> >
>> >> >
>> >> > Appreciate ~
>> >> >
>> >> >
>> >> > +Hugo Kuo+
>> >> > (+886) 935004793
>> >> >
>> >> >
>> >> > 2013/9/11 Yehuda Sadeh <yehuda@xxxxxxxxxxx>
>> >> >>
>> >> >> On Wed, Sep 11, 2013 at 7:57 AM, Kuo Hugo <tonytkdk@xxxxxxxxx>
>> >> >> wrote:
>> >> >> >
>> >> >> > Hi Yehuda,
>> >> >> >
>> >> >> > I tried it ... a question about modifying the param:
>> >> >> > How do I make it take effect in RadosGW? Is it by restarting radosgw?
>> >> >> > The value was set to 200, but I'm not sure whether it has been applied
>> >> >> > to RadosGW or not.
>> >> >> >
>> >> >> > Is there a way to check the runtime value of "rgw thread pool
>> >> >> > size" ?
>> >> >> >
>> >> >>
>> >> >> You can do it through the admin socket interface.
>> >> >> Try running something like:
>> >> >> $ ceph --admin-daemon /var/run/ceph/radosgw.asok config show
>> >> >>
>> >> >> $ ceph --admin-daemon /var/run/ceph/radosgw.asok config set
>> >> >> rgw_thread_pool_size 200
>> >> >>
>> >> >>
>> >> >> The path to the admin socket may be different, and in any case can
>> >> >> be
>> >> >> set through the 'admin socket' variable in ceph.conf.
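>> >> >>
>> >> >> For example (the exact path here is just an illustration), something
>> >> >> like this under your [client.radosgw.gateway] section should work:
>> >> >>
>> >> >>     admin socket = /var/run/ceph/radosgw.asok
>> >> >>
>> >> >> and then restart radosgw before querying it.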
>> >> >>
>> >> >> Yehuda
>> >> >>
>> >> >>
>> >> >> >
>> >> >> >
>> >> >> > 2013/9/11 Yehuda Sadeh <yehuda@xxxxxxxxxxx>
>> >> >> >>
>> >> >> >> Try modifying the 'rgw thread pool size' param in your ceph.conf.
>> >> >> >> By
>> >> >> >> default it's 100, so try increasing it and see if it affects
>> >> >> >> anything.
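>> >> >> >>
>> >> >> >> For example (just a sketch; adjust to your own setup), in the
>> >> >> >> [client.radosgw.gateway] section of ceph.conf:
>> >> >> >>
>> >> >> >>     rgw thread pool size = 200
>> >> >> >>
>> >> >> >> and restart the gateway so it picks up the new value.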
>> >> >> >>
>> >> >> >> Yehuda
>> >> >> >>
>> >> >> >>
>> >> >> >> On Wed, Sep 11, 2013 at 3:14 AM, Kuo Hugo <tonytkdk@xxxxxxxxx>
>> >> >> >> wrote:
>> >> >> >>>
>> >> >> >>> For ref:
>> >> >> >>>
>> >> >> >>> Benchmark result
>> >> >> >>>
>> >> >> >>> Could someone help me improve the performance of this high-concurrency
>> >> >> >>> use case?
>> >> >> >>>
>> >> >> >>> Any suggestion would be excellent!
>> >> >> >>>
>> >> >> >>> +Hugo Kuo+
>> >> >> >>> (+886) 935004793
>> >> >> >>>
>> >> >
>> >> >
>> >
>> >
>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
