Re: 3 node NFS-Ganesha Cluster

On 11/30/2015 03:26 PM, Soumya Koduri wrote:
Hi,
But are you telling me that in a 3-node cluster,
quorum is lost when one of the nodes' IPs is down?

Yes. It's a limitation of Pacemaker/Corosync: if the nodes
participating in the cluster cannot communicate with a majority of the
members (i.e. quorum is lost), the cluster is shut down.
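The majority rule referred to above can be sketched numerically: Corosync's votequorum grants quorum to a partition only when it holds a strict majority of the expected votes. The node counts below are illustrative, not taken from this thread:

```shell
# Hedged sketch of the majority rule Corosync's votequorum applies:
# a partition retains quorum only with a strict majority of expected votes.
has_quorum() {
  expected=$1; alive=$2
  needed=$(( expected / 2 + 1 ))
  if [ "$alive" -ge "$needed" ]; then
    echo "$alive of $expected (need $needed): quorum held"
  else
    echo "$alive of $expected (need $needed): quorum lost, cluster stops"
  fi
}
has_quorum 3 2   # -> "2 of 3 (need 2): quorum held"
has_quorum 3 1   # -> "1 of 3 (need 2): quorum lost, cluster stops"
has_quorum 4 3   # -> "3 of 4 (need 3): quorum held"
```

By this arithmetic a healthy 3-node cluster should survive one unreachable member, which is why the behaviour reported below is worth checking against the actual vote configuration (e.g. `corosync-quorumtool -s`).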


However, I am setting up an additional node to test a 4-node setup.
Even then, if I take down one node and nfs-grace_start
(/usr/lib/ocf/resource.d/heartbeat/ganesha_grace) does not run properly
on the other nodes, could the whole cluster go down because quorum is
lost again?

That's strange. We have tested such configurations quite a few times but
haven't hit this issue. (CC'ing Saurabh, who has been testing many such
configurations; I forgot to CC him on the earlier mail.)


Recently we have observed the resource agents (nfs-grace_*) timing out
sometimes, especially when a node is taken down, but that shouldn't
cause the entire cluster to shut down.
Could you check the logs (/var/log/messages, /var/log/pacemaker.log) for
any errors/warnings reported when one node is taken down in the 4-node
setup?
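The log check suggested above can be scripted. The paths are the ones from the mail; the grep pattern is only a guess at typical Pacemaker/Corosync failure wording, so widen it as needed:

```shell
# Sketch: surface recent error/warning lines from the logs named above.
# Run as root on each surviving node after taking one node down.
scanned=0
for f in /var/log/messages /var/log/pacemaker.log; do
  if [ -r "$f" ]; then
    echo "== $f =="
    # pattern is an assumption about the log wording, not from the thread
    grep -inE 'error|warn|timed out' "$f" | tail -n 20
    scanned=$(( scanned + 1 ))
  fi
done
echo "scanned $scanned log file(s)"
```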

Thanks,
Soumya
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


