Hi Maged,
Thank you for the response. That helps a lot!
Looks like I have to spin up a new server quickly and float the IP to the new server. If I spin up the server after about 20 minutes, I guess I/O will recover after that, but the previous state will be gone since it passed the grace period?

On Oct 31, 2021, at 4:51 AM, Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:
On 31/10/2021 05:29, Xiaolong Jiang wrote:
Hi Experts.
I am a bit confused about the ganesha active-active setup.
We can set up multiple ganesha servers on top of CephFS, and
clients can point to different ganesha servers to serve the
traffic; that can scale out the traffic.
From the client side, is it using DNS round robin to connect
directly to a ganesha server?
Is it possible to front all ganesha servers with a load
balancer, so clients only connect to the load balancer IP and
writes can be load balanced across all ganesha servers?
My current feeling is that we probably have to use the DNS
approach, and a specific client's read/write requests can only
go to the same ganesha server for the session.
--
Best regards,
Xiaolong Jiang
Senior Software Engineer at Netflix
Columbia University
Load balancing ganesha means some clients are served by one
gateway and other clients by other gateways, so we distribute the
clients and their load across the different gateways, but each
client remains on a specific gateway; you cannot have a single
client load balance across several gateways.
A good way to distribute clients across the gateways is via round
robin DNS, but you do not have to: you can distribute IPs manually
among your clients if you want, but DNS automates the process in a
scalable way.
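As a rough illustration (the name, addresses, and export path below are
made up), round robin DNS just means publishing several A records under
one name, so different clients resolve to different gateways:

  ; hypothetical zone snippet: one name, three ganesha gateways
  nfs   IN A 192.0.2.11
  nfs   IN A 192.0.2.12
  nfs   IN A 192.0.2.13

  # each client mounts the same name and stays on whichever
  # gateway it happened to resolve:
  mount -t nfs nfs.example.com:/cephfs /mnt/cephfs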
One note about high availability: currently you cannot fail over
clients to another ganesha gateway in case of failure, but if you
bring the failed gateway back online quickly enough, the client
connections will resume. So to support HA in case of a host server
failure, the ganesha gateways are implemented as containers, so you
can start the failed container on a new host server.
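As a very rough sketch of that recovery path (the container runtime,
image name, interface, and IP are placeholders; it assumes host
networking and that the ganesha config and recovery state are
reachable from the new host):

  # move the failed gateway's IP to the new host
  ip addr add 192.0.2.11/24 dev eth0

  # then start the same ganesha container there, ideally within the
  # NFS grace period so clients can reclaim their state
  podman run -d --net=host --name nfs-ganesha <ganesha-image>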
/Maged
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx