Re: share haproxy config for radosgw [EXT]


 



On 14/02/2021 21:31, Graham Allan wrote:

On Tue, Feb 9, 2021 at 11:00 AM Matthew Vernon <mv3@xxxxxxxxxxxx> wrote:

    On 07/02/2021 22:19, Marc wrote:
     >
     > I was wondering if someone could post a config for haproxy. Is
    there something specific to configure? Like binding clients to a
    specific backend server, client timeouts, security specific to rgw etc.

    Ours is templated out by ceph-ansible; to try and condense out just the
    interesting bits:

    (snipped the config...)

    The aim is to use all available CPU on the RGWs at peak load, but
    also to try and prevent one user overwhelming the service for
    everyone else - hence the dropping of idle connections and soft
    (and then hard) limits on per-IP connections.
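The actual config was snipped from the quoted mail, but the behaviour described above (dropping idle connections, plus soft and then hard per-IP connection limits) can be sketched in haproxy terms roughly as follows. All thresholds, names, and addresses here are illustrative assumptions, not the values from the snipped config:

```
# Hedged sketch only -- thresholds and names are assumptions,
# not the actual (snipped) ceph-ansible-templated config.
frontend rgw_front
    mode http
    bind *:443 ssl crt /etc/haproxy/rgw.pem
    # Drop idle clients so one user cannot pin connection slots open
    timeout client 30s
    timeout http-request 10s
    # Track concurrent connections per source IP
    stick-table type ip size 1m expire 60s store conn_cur
    tcp-request connection track-sc0 src
    # Hard limit: reject outright above the higher threshold
    tcp-request connection reject if { src_conn_cur ge 300 }
    # Soft limit: shunt heavy users to a backend with fewer slots
    use_backend rgw_limited if { sc0_conn_cur gt 100 }
    default_backend rgw_main

backend rgw_main
    mode http
    balance leastconn
    server rgw1 192.0.2.11:8080 check
    server rgw2 192.0.2.12:8080 check
```

The soft/hard split could equally be done with `http-request deny` or tarpitting; the essential pieces are the stick table storing `conn_cur` and the per-source-IP thresholds.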


Can I ask a follow-up question to this: how many haproxy instances do you then run - one on each of your gateways, with keepalived to manage which is active?

One on each gateway, yes. We use RIP - each RGW listens on each of the 6 service IPs (and knows about all 6 RGWs, so haproxy can hand off traffic if overloaded). The switches do some work to make sure traffic from our OpenStack goes to its "nearest" RGW where possible.

Like the setup you describe, RIP has no way of knowing whether the radosgw has gone down while the host is otherwise up; haproxy, however, can tell that, which I think is an advantage.
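The point about haproxy noticing a dead radosgw process while the host is still up corresponds to per-server health checks on the backend. A minimal hedged example - the port, check path, and timings are assumptions, not taken from the thread (radosgw does expose a `/swift/healthcheck` endpoint by default):

```
backend rgw_back
    mode http
    balance leastconn
    # haproxy marks a server DOWN when radosgw stops answering the
    # health check, even if the host itself is still up and pingable.
    option httpchk GET /swift/healthcheck
    server rgw1 192.0.2.11:8080 check inter 2s fall 3 rise 2
    server rgw2 192.0.2.12:8080 check inter 2s fall 3 rise 2
```

A layer-3 mechanism like RIP or keepalived only sees the host or VIP; the application-level `httpchk` is what lets haproxy route around a wedged radosgw daemon.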

We needed to tune the haproxy and radosgw setup to get as much out of the gateway hardware as possible (we used cosbench); redoing the benchmarking while bypassing haproxy showed that haproxy had very little impact on performance.

Regards,

Matthew




--
The Wellcome Sanger Institute is operated by Genome Research Limited, a charity registered in England with number 1021457 and a company registered in England with number 2742969, whose registered office is 215 Euston Road, London, NW1 2BE.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


