Re: How you loadbalance your rgw endpoints?


 



Hi,

How many RGWs are you using for this huge cluster?

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Svante Karlsson <svante.karlsson@xxxxxx> 
Sent: Monday, September 27, 2021 5:44 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: Ceph Users <ceph-users@xxxxxxx>
Subject: Re:  How you loadbalance your rgw endpoints?


Hi Szabo,

we have a 7 PB cluster that only serves S3 content for read-heavy jobs running on a dedicated Kubernetes cluster; all connections are 100G.
We overloaded first the RGW gateways, and then the load balancers. The hackish solution we came up with is to add each Kubernetes node as a Ceph member and run an RGW on the node (outside Kubernetes). We added a common extra IP address with an iptables rule mapping it to localhost. Finally, each Kubernetes job uses this common IP to reach the "local" RGW server. This way we skip two hops of network traffic to the real gateway, and it scales with the number of clients (Kubernetes nodes).
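
For illustration only, here is roughly what the per-node mapping could look like, with a made-up shared address (10.255.0.1) and the default RGW port (7480); the exact rules depend on your kernel and network setup:

  # allow locally generated traffic to be DNAT'ed to loopback
  sysctl -w net.ipv4.conf.all.route_localnet=1
  # map the shared "virtual" endpoint address to the RGW on this node
  iptables -t nat -A OUTPUT -d 10.255.0.1 -p tcp --dport 7480 \
      -j DNAT --to-destination 127.0.0.1:7480

Every S3 client on the node is then pointed at http://10.255.0.1:7480 as its endpoint, so the same client configuration works on every node while the traffic never leaves the host.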

Den fre 24 sep. 2021 kl 08:01 skrev Szabo, Istvan (Agoda)
<Istvan.Szabo@xxxxxxxxx>:
>
> Hi,
>
> Wondering how you guys do it, since we will always have a limitation on the network bandwidth of the load balancer.
>
> Or, if there is no balancer, what should we monitor to tell if one RGW is maxed out? I'm using 15 RGWs.
>
> Ty
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



