Re: U of Minn

You are unlikely to bottleneck HAProxy on anything except the NIC,
at least with normal configurations.

On Tue, Feb 16, 2021 at 9:12 AM Chip Cox <chip@xxxxxxxxxxxx> wrote:
>
> Is this your Graham?
>
> > On Feb 14, 2021, at 4:31 PM, Graham Allan <gta@xxxxxxx> wrote:
> >
> > On Tue, Feb 9, 2021 at 11:00 AM Matthew Vernon <mv3@xxxxxxxxxxxx> wrote:
> >
> >> On 07/02/2021 22:19, Marc wrote:
> >>>
> >>> I was wondering if someone could post a config for haproxy. Is there
> >> something specific to configure? Like binding clients to a specific backend
> >> server, client timeouts, security specific to rgw etc.
> >>
> >> Ours is templated out by ceph-ansible; to try and condense out just the
> >> interesting bits:
> >>
> >> (snipped the config...)
> >>
> >> The aim is to use all available CPU on the RGWs at peak load, but also
> >> to prevent one user from overwhelming the service for everyone else -
> >> hence the dropping of idle connections and the soft (and then hard)
> >> limits on per-IP connections.
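[For illustration only - this is not the config snipped above, just a minimal sketch of how idle-connection timeouts plus soft/hard per-IP limits can be expressed in haproxy. All names, addresses, thresholds, and the health-check path are invented assumptions; adjust to your own rgw deployment:]

```haproxy
frontend rgw_frontend
    bind *:80
    mode http
    timeout client 30s                    # drop idle client connections
    # track concurrent connections per source IP
    stick-table type ip size 100k expire 60s store conn_cur
    tcp-request connection track-sc0 src
    # hard limit: drop the connection outright above this threshold
    tcp-request connection reject if { src_conn_cur ge 300 }
    # soft limit: answer 429 once an IP gets busy, before the hard cut-off
    http-request deny deny_status 429 if { src_conn_cur ge 100 }
    default_backend rgw_backend

backend rgw_backend
    mode http
    balance leastconn
    timeout server 60s
    option httpchk HEAD /                 # health-check path is an assumption
    server rgw1 192.0.2.11:8080 check
    server rgw2 192.0.2.12:8080 check
```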
> >>
> >
> > Can I ask a followup question to this: how many haproxy instances do you
> > then run - one on each of your gateways, with keepalived to manage which is
> > active?
> >
> > I ask because, since before I was involved with our ceph object store, it
> > has been load-balanced between multiple rgw servers directly using
> > bgp-ecmp. It doesn't sound like this is common practice in the ceph
> > community, and I'm wondering what the pros and cons are.
> >
> > The bgp-ecmp load balancing has the flaw that it's not truly fault
> > tolerant, at least without additional checks to shut down the local quagga
> > instance when rgw isn't responding. It only tolerates an entire server
> > going down - which meets our original goal of rolling maintenance/updates
> > - but not a radosgw process going unresponsive. In
> > addition I think we have always seen some background level of clients being
> > sent "connection reset by peer" errors, which I have never tracked down
> > within radosgw; I wonder if these might be masked by an haproxy frontend?
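[For illustration - the "additional checks" mentioned above could be a small watchdog that stops the local routing daemon when radosgw stops answering, so BGP withdraws the node from the ECMP group. Everything here is a sketch under assumptions: the endpoint, the timeout, and the service name are invented, not from the thread:]

```python
#!/usr/bin/env python3
"""Hypothetical ECMP watchdog: stop the local routing daemon (and thereby
withdraw this node's route) when radosgw stops responding.

Assumes radosgw listens on 127.0.0.1:8080 and that the routing daemon runs
as a systemd service named "quagga" - both are illustrative, not the
original poster's setup."""
import subprocess
import urllib.error
import urllib.request

RGW_URL = "http://127.0.0.1:8080/"   # hypothetical local radosgw endpoint
TIMEOUT_S = 2

def rgw_is_healthy(url: str = RGW_URL) -> bool:
    """Treat radosgw as healthy if it answers HTTP at all (status < 500)."""
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_S) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as e:
        return e.code < 500   # a 4xx still means the daemon is alive
    except OSError:
        return False          # refused / timed out: daemon looks down

def main() -> None:
    if not rgw_is_healthy():
        # Service name is an assumption; stopping it withdraws the route.
        subprocess.run(["systemctl", "stop", "quagga"], check=False)

if __name__ == "__main__":
    main()
```

Run from cron or a systemd timer; the same probe could equally feed bird's or quagga's own health hooks.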
> >
> > The converse is that all client gateway traffic must generally pass through
> > a single haproxy instance, while bgp-ecmp distributes the connections
> > across all nodes. Perhaps haproxy is lightweight and efficient enough that
> > this makes little difference to performance?
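[For comparison, the keepalived-fronted alternative asked about above usually floats one VIP between the haproxy nodes, so only the current master carries traffic. A minimal sketch - interface name, VIP, router id, and priority are all illustrative assumptions:]

```
vrrp_script chk_haproxy {
    script "pidof haproxy"     # or a real health probe
    interval 2
}

vrrp_instance RGW_VIP {
    state BACKUP
    interface eth0             # illustrative interface name
    virtual_router_id 51
    priority 100               # highest priority wins the election
    nopreempt
    virtual_ipaddress {
        192.0.2.10/24          # documentation-range VIP
    }
    track_script {
        chk_haproxy
    }
}
```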
> >
> > Graham
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
>
> Chip Cox
> Director, Sales  |  SoftIron
> 770.314.8300 <tel:770.314.8300>
> chip@xxxxxxxxxxxx
>  <mailto:chip@xxxxxxxxxxxx>
>
>
>


