Re: NFS Ganesha Active Active Question

You can fail over from one running Ganesha to another using something like ctdb or pacemaker/corosync; this is how some other clustered filesystems (e.g. Gluster) use Ganesha. It is not how the Ceph community has decided to implement HA with Ganesha, so it will be a more manual setup for you, but it can be done.
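For reference, a minimal pacemaker sketch along those lines might look roughly like the following (the resource names, the 10.0.0.100 virtual IP, and the netmask are placeholders, and again this is not the Ceph-supported path):

    # floating IP that clients mount against
    pcs resource create ganesha_vip ocf:heartbeat:IPaddr2 \
        ip=10.0.0.100 cidr_netmask=24 op monitor interval=10s

    # the NFS Ganesha service itself (systemd-managed)
    pcs resource create ganesha systemd:nfs-ganesha \
        op monitor interval=30s

    # keep the IP wherever Ganesha is running, and start Ganesha first
    pcs constraint colocation add ganesha_vip with ganesha INFINITY
    pcs constraint order ganesha then ganesha_vip

You would still want Ganesha's recovery backend (e.g. one of the rados_* backends) pointed at shared RADOS storage so client open/lock state can be reclaimed after the resource moves.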

Daniel

On 10/31/21 1:47 PM, Xiaolong Jiang wrote:
Hi Maged

Yeah, it requires cloud integration to quickly fail over the IP. For me, I probably need to have a standby server, and once I detect the instance is dead, ask cephadm to schedule Ganesha there and attach the IP to the new server.
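Assuming a cephadm-managed NFS service, that manual recovery could be roughly along these lines (the service name "mynfs" and host name are placeholders, and exact flags vary by Ceph release):

    # re-place the existing ganesha service on the standby host
    ceph orch apply nfs mynfs --placement="standby-host"

    # confirm where the daemon is now running
    ceph orch ps

    # then move the floating/elastic IP to that host (via the cloud
    # provider's API) so clients keep mounting the same address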

On Oct 31, 2021, at 10:40 AM, Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:



Hi Xiaolong

The grace period is 90 sec. The failover process should be automated and should run quicker than that, maybe 15-30 sec (not too quick, to avoid false alarms); this will let client IO resume after a small pause.
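For reference, the relevant knobs live in the NFSV4 block of ganesha.conf; the values below are just the defaults shown for illustration, not a tuning recommendation:

    NFSV4 {
        # window clients have to reclaim state after a server restart
        Grace_Period = 90;
        # how long a client lease lasts before the server may expire it
        Lease_Lifetime = 60;
    }

With the CephFS FSAL and the rados_cluster recovery backend, the grace period is coordinated across the gateways (the ganesha-rados-grace tool can show its state).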

/Maged

On 31/10/2021 17:37, Xiaolong Jiang wrote:
Hi Maged ,

Thank you for the response. That helps a lot!

Looks like I have to spin up a new server quickly and float the IP to it. If I spin up the server only after about 20 mins, I guess IO will recover at that point, but the previous client state will be gone since the grace period has passed?

On Oct 31, 2021, at 4:51 AM, Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:




On 31/10/2021 05:29, Xiaolong Jiang wrote:
Hi Experts.

I am a bit confused about ganesha active-active setup.

We can set up multiple Ganesha servers on top of CephFS, and clients can point to different Ganesha servers to serve the traffic; that scales out the traffic.

From the client side, is it using DNS round robin to connect directly to a Ganesha server? Is it possible to front all the Ganesha servers with a load balancer, so a client only connects to the load balancer IP and writes can be load-balanced across all Ganesha servers?

My current feeling is that we probably have to use the DNS approach, and a given client's read/write requests can only go to the same Ganesha server for the session.

--
Best regards,
Xiaolong Jiang

Senior Software Engineer at Netflix
Columbia University

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


Load balancing Ganesha means some clients are served by one gateway and other clients by other gateways, so we distribute the clients and their load across the different gateways, but each client remains on a specific gateway; you cannot have a single client load-balance across several gateways.

A good way to distribute clients across the gateways is round-robin DNS, but you do not have to use it; you can distribute IPs manually among your clients if you want. DNS just automates the process in a scalable way.
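For illustration, round-robin DNS is just multiple A records for the same name, one per gateway (the name and addresses below are placeholders):

    ; BIND-style zone snippet: clients resolving nfs.example.com get the
    ; gateway addresses in rotating order
    nfs.example.com.   IN  A  10.0.0.11
    nfs.example.com.   IN  A  10.0.0.12
    nfs.example.com.   IN  A  10.0.0.13

Each client then mounts nfs.example.com and sticks with whichever gateway it resolved.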

One note about high availability: currently you cannot fail over clients to another Ganesha gateway in case of failure, but if you bring the failed gateway back online quickly enough, the client connections will resume. To support HA in case of a host server failure, the Ganesha gateways are implemented as containers, so you can start the failed container on a new host server.
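On the client side, a hard mount is what lets IO pause and then resume once the gateway (and its IP) is back; a typical mount might look like the following (the server name, export path, and timeouts are placeholders):

    # hard mount: IO blocks during the outage and resumes when the
    # gateway's address answers again, instead of returning errors
    mount -t nfs -o nfsvers=4.1,hard,timeo=600,retrans=2 \
        nfs.example.com:/cephfs /mnt/cephfs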

/Maged


_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



