Re: glusterfs, ganesha, and pcs rules

Have you tried with:

VIP_tlxdmz-nfs1="10.X.X.181"
VIP_tlxdmz-nfs2="10.X.X.182"

Instead of:

VIP_server1="10.X.X.181"
VIP_server2="10.X.X.182"
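In other words, each VIP_ entry must be named after a node listed in HA_CLUSTER_NODES. A minimal sketch of how the two settings pair up, using the node names from the status output below:

HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
#
# Each VIP_<name> suffix must match a node name listed above;
# a mismatch leaves the IPaddr resources with no ip parameter.
VIP_tlxdmz-nfs1="10.X.X.181"
VIP_tlxdmz-nfs2="10.X.X.182"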

 

Also, I don't have HA_VOL_SERVER in my settings, but I'm using Gluster 3.10.x. I think it's deprecated in 3.10 but not in 3.8.

Is your /etc/hosts file or your DNS correct for these servers?
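A quick way to check on every node, since getent consults both /etc/hosts and DNS (per nsswitch.conf):

# Should print the same address on each node of the cluster:
getent hosts tlxdmz-nfs1
getent hosts tlxdmz-nfs2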

 

Renaud

 

 

From: Hetz Ben Hamo [mailto:hetz@xxxxxxxx]
Sent: 24 December 2017 04:33
To: Renaud Fortier <Renaud.Fortier@xxxxxxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: [Gluster-users] glusterfs, ganesha, and pcs rules

 

I checked, and I have it like this:

 

# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-nfs"
#
# The gluster server from which to mount the shared data volume.
HA_VOL_SERVER="tlxdmz-nfs1"
#
# N.B. you may use short names or long names; you may not use IP addrs.
# Once you select one, stay with it as it will be mildly unpleasant to
# clean up if you switch later on. Ensure that all names - short and/or
# long - are in DNS or /etc/hosts on all machines in the cluster.
#
# The subset of nodes of the Gluster Trusted Pool that form the ganesha
# HA cluster. Hostname is specified.
HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
#
# Virtual IPs for each of the nodes specified above.
VIP_server1="10.X.X.181"
VIP_server2="10.X.X.182"


Thanks,

Hetz Ben Hamo

You're welcome to visit my consulting blog or my personal blog.

 

On Thu, Dec 21, 2017 at 3:47 PM, Renaud Fortier <Renaud.Fortier@xxxxxxxxxxxxxx> wrote:

Hi,
In your ganesha-ha.conf, do you have your virtual IP addresses set something like this:

VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"

Renaud

From: gluster-users-bounces@xxxxxxxxxxx [mailto:gluster-users-bounces@xxxxxxxxxxx] On behalf of Hetz Ben Hamo
Sent: 20 December 2017 04:35
To: gluster-users@xxxxxxxxxxx
Subject: [Gluster-users] glusterfs, ganesha, and pcs rules


Hi,

I've just created the Gluster setup with NFS-Ganesha again, GlusterFS version 3.8.

When I run the command gluster nfs-ganesha enable, it returns success. However, looking at the pcs status, I see this:

[root@tlxdmz-nfs1 ~]# pcs status
Cluster name: ganesha-nfs
Stack: corosync
Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Wed Dec 20 09:20:44 2017
Last change: Wed Dec 20 09:19:27 2017 by root via cibadmin on tlxdmz-nfs1

2 nodes configured
8 resources configured

Online: [ tlxdmz-nfs1 tlxdmz-nfs2 ]

Full list of resources:

 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ tlxdmz-nfs1 tlxdmz-nfs2 ]
 tlxdmz-nfs1-cluster_ip-1       (ocf::heartbeat:IPaddr):        Stopped
 tlxdmz-nfs2-cluster_ip-1       (ocf::heartbeat:IPaddr):        Stopped

Failed Actions:
* tlxdmz-nfs1-cluster_ip-1_monitor_0 on tlxdmz-nfs2 'not configured' (6): call=23, status=complete, exitreason='IP address (the ip parameter) is mandatory',
    last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=26ms
* tlxdmz-nfs2-cluster_ip-1_monitor_0 on tlxdmz-nfs2 'not configured' (6): call=27, status=complete, exitreason='IP address (the ip parameter) is mandatory',
    last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=26ms
* tlxdmz-nfs1-cluster_ip-1_monitor_0 on tlxdmz-nfs1 'not configured' (6): call=23, status=complete, exitreason='IP address (the ip parameter) is mandatory',
    last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=24ms
* tlxdmz-nfs2-cluster_ip-1_monitor_0 on tlxdmz-nfs1 'not configured' (6): call=27, status=complete, exitreason='IP address (the ip parameter) is mandatory',
    last-rc-change='Wed Dec 20 09:19:28 2017', queued=0ms, exec=61ms


Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

Any suggestions on how this can be fixed when enabling nfs-ganesha with the above command, or anything else I can do to fix the failed actions?
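One possible way to confirm the cause and recover, sketched with the pcs 0.9.x commands matching the version in the status output (resource names taken from that output):

# Inspect a stopped IPaddr resource; with mismatched VIP_ names in
# ganesha-ha.conf, its ip= parameter comes out empty:
pcs resource show tlxdmz-nfs1-cluster_ip-1

# After correcting the VIP_ entries (the HA setup may need to be
# re-run, e.g. gluster nfs-ganesha disable then enable, so the
# resources are recreated), clear the failed actions:
pcs resource cleanup tlxdmz-nfs1-cluster_ip-1
pcs resource cleanup tlxdmz-nfs2-cluster_ip-1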

Thanks

 

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
