Re: bootstrapping cluster "failure" condition fix for local mounts (like: "gluster volume set all cluster.enable-shared-storage enable")

On 09/05/17 19:18, hvjunk wrote:

On 03 May 2017, at 07:49, Jiffin Tony Thottan <jthottan@xxxxxxxxxx> wrote:

On 02/05/17 15:27, hvjunk wrote:
Good day,

I’m busy setting up/testing NFS-HA with GlusterFS storage across VMs running Debian 8. The GlusterFS volume is to be "replica 3 arbiter 1".

In the NFS-Ganesha information I’ve gleaned thus far, it mentions the "gluster volume set all cluster.enable-shared-storage enable" command.

My first question is this: is the shared volume that gets created/set up supposed to be resilient across reboots?
 That appears not to be the case in my test setup thus far; the mount doesn’t get recreated/remounted after a reboot.

The following is the script which creates the shared storage and mounts it on the node; an entry will also be added to /etc/fstab:
https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh

But there is a possibility that, if glusterd (I hope you have enabled the glusterd service) is not started before
systemd tries to mount the shared storage, then the mount will fail.
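One possible mitigation, as a sketch (the hostname node1 is a placeholder, and these options are not what the hook script writes by default), is to tag the fstab entry so systemd orders the mount after glusterd and treats it as a network mount:

node1:/gluster_shared_storage /gluster_shared_storage glusterfs defaults,_netdev,x-systemd.requires=glusterd.service 0 0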

Thanks for the systemd helper script
--
Jiffin

Thanks Jiffin,

 I have since found that (1) you need to wait a bit for the cluster to “settle” after that script has executed, before you reboot the cluster (as you might see in my Bitbucket Ansible scripts at https://bitbucket.org/dismyne/gluster-ansibles/src) … something to add to the manuals, perhaps, to warn people to wait for that script to finish before rebooting the node/VM/server(s)?
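A minimal sketch of such a wait (assuming the shared-storage volume keeps its default name, gluster_shared_storage):

# Poll until glusterd reports the shared-storage volume as started,
# before proceeding to reboot; the 5-second interval is arbitrary.
until gluster volume info gluster_shared_storage 2>/dev/null | grep -q 'Status: Started'; do
    sleep 5
done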

 (2) the default configuration can’t reliably bootstrap the /gluster_shared_storage volume/directory from a clean shutdown and reboot of the whole cluster!

The problem: systemd and its wanting to have control over /etc/fstab and the mounting, and so on… (and I’ll not speak my mind about L.P., based on his remarks in https://github.com/systemd/systemd/issues/4468#issuecomment-255711912, after my struggling with this issue).


To have a reliable bootstrap (from all nodes down, booting up), I'm using the following systemd service and helper script(s) to have the Gluster cluster nodes mount their local mounts (like /gluster_shared_storage) reliably:
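A minimal sketch of such a unit and helper (the names mount-shared-storage.service and /usr/local/sbin/mount-shared-storage.sh are illustrative, not necessarily those used in the Ansible repository above):

/etc/systemd/system/mount-shared-storage.service:

[Unit]
Description=Mount glusterfs local mounts once glusterd is up
After=glusterd.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/mount-shared-storage.sh

[Install]
WantedBy=multi-user.target

/usr/local/sbin/mount-shared-storage.sh:

#!/bin/sh
# Retry the fstab mount until glusterd is far enough along to serve
# the volume; the retry count and sleep time are illustrative.
MP=/gluster_shared_storage
for attempt in $(seq 1 10); do
    mountpoint -q "$MP" && exit 0
    mount "$MP"
    sleep 15
done
echo "Failed to mount $MP after $attempt attempts" >&2
exit 1

One way to wire this up: mark the fstab entry noauto so systemd's own mount unit doesn't race glusterd at boot, and let this service (systemctl enable mount-shared-storage.service) do the retrying instead.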

--
Jiffin

If the mount is not resilient, i.e. not recreated/mounted by GlusterFS and not added to /etc/fstab by GlusterFS either, why the initial auto-mount by GlusterFS but not again after a reboot?

The biggest “issue” I have found with GlusterFS is the interaction with systemd and mounts that fail and don’t get properly retried later during bootstrapping of the cluster (I will email separately about that issue). That is why I need to confirm the reasoning behind this initial auto-mounting, but then the need to manually add it to /etc/fstab.

Thank you
Hendrik

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
