Re: [RadosGW] Performance for Concurrent Connections

On Thu, Sep 12, 2013 at 4:11 AM, Fuchs, Andreas (SwissTXT)
<Andreas.Fuchs@xxxxxxxxxxx> wrote:
> Hi Yehuda
>
> I run a setup similar to Hugo's.
>
> Radosgw is on a dedicated host, with no OSDs on the gateway. There is no socket on my radosgw either, apart from:
> unix  2      [ ACC ]     STREAM     LISTENING     10029    /tmp/radosgw.sock
>

Try setting the socket path through the 'admin socket' configurable in
your ceph.conf. Just note that if you share your ceph.conf between
radosgw and radosgw-admin, they're going to override each other's socket.
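
For example, something like this in ceph.conf (a sketch; the section name
should be whatever your radosgw runs as, and the $name metavariable is one
way to keep radosgw and radosgw-admin from clobbering each other's socket,
since each process then gets a path based on its own name):

[client.radosgw01.hostname]
    # expands to e.g. /var/run/ceph/client.radosgw01.hostname.asok
    admin socket = /var/run/ceph/$name.asok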

> I changed the PGs of .rgw and .rgw.bucket to 1000 (roughly via the commands sketched below).
> Added rgw cache enabled = false to the client.radosgw01.hostname section in ceph.conf.
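>
> (For reference, the pool change was along these lines; an illustration,
> not an exact transcript:)
>
> ceph osd pool set .rgw pg_num 1000
> ceph osd pool set .rgw pgp_num 1000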
>
> No change in terms of performance


Are there still 2 disks that are being hit the most? If so, try
connecting through the admin sockets of the corresponding OSDs and see
which specific requests are thrashing those OSDs. You could also try
the radosgw admin socket (if you manage to set it up).
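
For example (the OSD id and socket path are placeholders; point these at
the OSDs that back the two hot disks):

$ ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok dump_ops_in_flight
$ ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok perf dump

dump_ops_in_flight lists the requests that OSD is servicing at that
moment, which should show what kind of rgw traffic is landing on it.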

Yehuda

>
>> -----Original Message-----
>> From: Yehuda Sadeh [mailto:yehuda@xxxxxxxxxxx]
>> Sent: Donnerstag, 12. September 2013 06:38
>> To: Kuo Hugo
>> Cc: Fuchs, Andreas (SwissTXT); ceph-users@xxxxxxxxxxxxxx
>> Subject: Re: [RadosGW] Performance for Concurrent Connections
>>
>> On Wed, Sep 11, 2013 at 9:34 PM, Kuo Hugo <tonytkdk@xxxxxxxxx> wrote:
>> > Hi Yehuda,
>> >
>> > Here's my ceph.conf
>> >
>> > root@p01:/tmp# cat /etc/ceph/ceph.conf
>> > [global]
>> > fsid = 6e05675c-f545-4d88-9784-ea56ceda750e
>> > mon_initial_members = s01, s02, s03
>> > mon_host = 192.168.2.61,192.168.2.62,192.168.2.63
>> > auth_supported = cephx
>> > osd_journal_size = 1024
>> > filestore_xattr_use_omap = true
>> >
>> > [client.radosgw.gateway]
>> > host = p01
>> > keyring = /etc/ceph/keyring.radosgw.gateway
>> > rgw_socket_path = /tmp/radosgw.sock
>> > log_file = /var/log/ceph/radosgw.log
>> > rgw_thread_pool_size = 200
>> >
>> > Per my conf, /tmp/radosgw.sock was created when the radosgw service
>> > started, so I tried to show the config with:
>> >
>> > root@p01:/tmp# ceph --admin-daemon /tmp/radosgw.sock config show
>> > read only got 0 bytes of 4 expected for response length; invalid command?
>> >
>> > Is it a bug or an operational mistake?
>>
>> You're connecting to the wrong socket. You need to connect to the admin
>> socket, not to the socket that is used for web server <-> gateway
>> communication. By default, the admin socket resides in /var/run/ceph.
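>>
>> For example, to see which admin sockets exist and what commands one
>> accepts (paths are illustrative):
>>
>> $ ls /var/run/ceph/*.asok
>> $ ceph --admin-daemon /var/run/ceph/radosgw.asok help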
>>
>>
>> >
>> > root@p01:/tmp# radosgw-admin -v
>> > ceph version 0.61.8 (a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b)
>> >
>> >
>> > Appreciate ~
>> >
>> >
>> > +Hugo Kuo+
>> > (+886) 935004793
>> >
>> >
>> > 2013/9/11 Yehuda Sadeh <yehuda@xxxxxxxxxxx>
>> >>
>> >> On Wed, Sep 11, 2013 at 7:57 AM, Kuo Hugo <tonytkdk@xxxxxxxxx> wrote:
>> >> >
>> >> > Hi Yehuda,
>> >> >
>> >> > I tried it ... a question about modifying the param:
>> >> > how do I make it take effect in RadosGW? Is it by restarting radosgw?
>> >> > I set the value to 200, but I'm not sure whether it was applied to
>> >> > RadosGW or not.
>> >> >
>> >> > Is there a way to check the runtime value of "rgw thread pool size"?
>> >> >
>> >>
>> >> You can do it through the admin socket interface.
>> >> Try running something like:
>> >> $ ceph --admin-daemon /var/run/ceph/radosgw.asok config show
>> >>
>> >> $ ceph --admin-daemon /var/run/ceph/radosgw.asok config set
>> >> rgw_thread_pool_size 200
>> >>
>> >>
>> >> The path to the admin socket may be different, and in any case can be
>> >> set through the 'admin socket' variable in ceph.conf.
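>> >>
>> >> For example, in the gateway's section of ceph.conf (the section name
>> >> here is taken from your conf; the path is just an example):
>> >>
>> >> [client.radosgw.gateway]
>> >>     admin socket = /var/run/ceph/radosgw.asok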
>> >>
>> >> Yehuda
>> >>
>> >>
>> >> >
>> >> >
>> >> > 2013/9/11 Yehuda Sadeh <yehuda@xxxxxxxxxxx>
>> >> >>
>> >> >> Try modifying the 'rgw thread pool size' param in your ceph.conf.
>> >> >> By default it's 100, so try increasing it and see if it affects anything.
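>> >> >>
>> >> >> For example (the section name is an assumption; use whatever your
>> >> >> radosgw runs as), and restart radosgw afterwards:
>> >> >>
>> >> >> [client.radosgw.gateway]
>> >> >>     rgw thread pool size = 200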
>> >> >>
>> >> >> Yehuda
>> >> >>
>> >> >>
>> >> >> On Wed, Sep 11, 2013 at 3:14 AM, Kuo Hugo <tonytkdk@xxxxxxxxx> wrote:
>> >> >>>
>> >> >>> For ref:
>> >> >>>
>> >> >>> Benchmark result
>> >> >>>
>> >> >>> Could someone help me improve the performance of this
>> >> >>> high-concurrency use case?
>> >> >>>
>> >> >>> Any suggestion would be excellent!
>> >> >>>
>> >> >>> +Hugo Kuo+
>> >> >>> (+886) 935004793
>> >> >>>
>> >
>> >
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



