On 03/21/2016 02:16 AM, ML Wong wrote:
Hello Soumya,
Thanks for answering my questions.
Question 1) I am still puzzled about what VOL refers to. Is that a
variable/parameter that I can specify somewhere in ganesha-ha.conf?
Any pointers will be very much appreciated.
No, it doesn't refer to any volume, as it is a global option. The log
message is misleading.
1) Those 3 test systems have neither firewalld nor SELinux running,
and I also verified that corosync.conf is now empty.
# sestatus
SELinux status: disabled
# firewall-cmd --zone=public --list-all
FirewallD is not running
# ls -al /etc/corosync/corosync.conf
-rw-r--r-- 1 root root 0 Mar 20 12:54 /etc/corosync/corosync.conf
2) I also do not find pacemaker.log under /var/log, but I found the
following. Would these be the same?
# ls -al /var/log/pcsd/pcsd.log
-rw-r--r--. 1 root root 162322 Mar 20 13:26 /var/log/pcsd/pcsd.log
In any case, that log is full of the following:
+++
I, [2016-03-20T13:33:34.982311 #939] INFO -- : Running:
/usr/sbin/corosync-cmapctl totem.cluster_name
I, [2016-03-20T13:33:34.982459 #939] INFO -- : CIB USER: hacluster,
groups:
I, [2016-03-20T13:33:34.985984 #939] INFO -- : Return Value: 1
+++
There should be a pacemaker.log too. Which version of pacemaker are
you using?
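One quick way to check, assuming an RPM-based install like yours:
# rpm -q pacemaker corosync pcs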
3) /var/log/messages - it does not look like ganesha is passing logs
to this file. But I see /var/log/ganesha.log, which logging seems to
be sent to per /etc/sysconfig/ganesha (OPTIONS="-L
/var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_FULL_DEBUG")
After it failed to acquire the volume, that server's "ganesha.log"
fills up with the following, while the other 2 nodes in the cluster
log nothing to ganesha.log. Instead, the other nodes log "E
[MSGID: 106062] [glusterd-op-sm.c:3728:glusterd_op_ac_unlock]
0-management: Unable to acquire volname" in
"etc-glusterfs-glusterd.vol.log":
+++
20/03/2016 13:37:32 : epoch 56ef059d : mlw-fusion1 :
ganesha.nfsd-5215[dbus_heartbeat] gsh_dbus_thread :DBUS :F_DBG :top of
poll loop
20/03/2016 13:37:32 : epoch 56ef059d : mlw-fusion1 :
ganesha.nfsd-5215[dbus_heartbeat] gsh_dbus_thread :RW LOCK :F_DBG
:Acquired mutex 0x7fd38e3fe080 (&dbus_bcast_lock) at
/builddir/build/BUILD/nfs-ganesha-2.3.0/src/dbus/dbus_server.c:689
20/03/2016 13:37:32 : epoch 56ef059d : mlw-fusion1 :
ganesha.nfsd-5215[dbus_heartbeat] gsh_dbus_thread :RW LOCK :F_DBG
:Released mutex 0x7fd38e3fe080 (&dbus_bcast_lock) at
/builddir/build/BUILD/nfs-ganesha-2.3.0/src/dbus/dbus_server.c:739
+++
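A side note on the flood above: that is expected with -N
NIV_FULL_DEBUG. If the noise gets in the way, you could lower the log
level in /etc/sysconfig/ganesha (I believe NIV_EVENT is the default),
e.g.
OPTIONS="-L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT"
and restart nfs-ganesha.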
'/var/log/messages' is where cluster-setup related errors are logged.
To debug, you could try the steps below -
* bring up nfs-ganesha server on all the nodes
# systemctl start nfs-ganesha
* Check if nfs-ganesha is successfully started (a quick check is
sketched after these steps)
* On one of the nodes,
# cd '/usr/libexec/ganesha'
# bash -x ./ganesha-ha.sh --setup /etc/ganesha
This will print any errors returned by the script to the console
during cluster setup.
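To verify nfs-ganesha started, assuming systemd, something like
# systemctl status nfs-ganesha
# ps aux | grep ganesha.nfsd
on each node should show the daemon running.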
Please give it a try and let me know if you see any errors.
Thanks,
Soumya
Testing Environment: Running CentOS Linux release 7.2.1511, glusterfs
3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64),
nfs-ganesha-gluster-2.3.0-1.el7.x86_64
On Mon, Mar 14, 2016 at 2:05 AM, Soumya Koduri <skoduri@xxxxxxxxxx> wrote:
Hi,
On 03/14/2016 04:06 AM, ML Wong wrote:
Running CentOS Linux release 7.2.1511, glusterfs 3.7.8
(glusterfs-server-3.7.8-2.el7.x86_64),
nfs-ganesha-gluster-2.3.0-1.el7.x86_64
1) Ensured connectivity between the gluster nodes by using ping
2) Disabled NetworkManager (Loaded: loaded
(/usr/lib/systemd/system/NetworkManager.service; disabled))
3) The 'gluster_shared_storage' volume is created by running (gluster
volume set all cluster.enable-shared-storage enable); it is mounted on
all nodes under /run/gluster/shared_storage, and the nfs-ganesha
directory is also created after the feature is enabled
4) Emptied out /etc/ganesha/ganesha.conf (I have tested ganesha
running as a stand-alone NFS server)
5) Installed pacemaker, corosync, and resource-agents
6) Reset the 'hacluster' system-user password to be the same on all
nodes:
# pcs cluster auth -u hacluster mlw-fusion1 mlw-fusion2 mlw-fusion3
Password:
mlw-fusion2: Authorized
mlw-fusion3: Authorized
mlw-fusion1: Authorized
7) IPv6 is enabled - (IPV6INIT=yes in
/etc/sysconfig/network-scripts/ifcfg-en*)
8) Started pcsd and corosync
9) Created /var/lib/glusterd/nfs/secret.pem and transferred it to the
other 2 nodes (one way to generate and distribute the key is sketched
after the config in step 10)
# ssh -i secret.pem root@mlw-fusion3 "echo helloworld"
helloworld
10) Transferred the following ganesha-ha.conf to the other nodes in
the cluster, changing the HA_VOL_SERVER value to mlw-fusion2 and
mlw-fusion3 accordingly
HA_NAME="ganesha-ha-01"
HA_VOL_SERVER="mlw-fusion1"
HA_CLUSTER_NODES="mlw-fusion1,mlw-fusion2,mlw-fusion3"
VIP_mlw_fusion1="192.168.30.201"
VIP_mlw_fusion2="192.168.30.202"
VIP_mlw_fusion3="192.168.30.203"
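A sketch for step 9, assuming ssh-keygen and ssh-copy-id are
available (adjust hosts and paths as needed):
# ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''
# ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@mlw-fusion2
# ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@mlw-fusion3
and, if the other nodes also need the private key:
# scp /var/lib/glusterd/nfs/secret.pem root@mlw-fusion2:/var/lib/glusterd/nfs/
# scp /var/lib/glusterd/nfs/secret.pem root@mlw-fusion3:/var/lib/glusterd/nfs/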
Question 1) As I am new to nfs-ganesha, pacemaker, and corosync, I
was mostly puzzled by the error message found in
'etc-glusterfs-glusterd.vol.log'. It seems to show the message below
regardless of what I have done to troubleshoot - so, what Volume are
these error messages referring to?
Just a guess: since this option is not tied to any particular volume,
it may have printed (null) in the error message. Could you check
'/var/log/messages' and '/var/log/pacemaker.log' for errors/warnings?
Since you are running a RHEL 7 based system, please also check whether
there are denials from selinux or firewalld.
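For instance, assuming the standard selinux, audit, and firewalld
tools are installed:
# getenforce
# ausearch -m avc -ts recent
# firewall-cmd --state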
Is that referring to the HA_VOL_NAME in
/usr/libexec/ganesha/ganesha-ha.sh? Do I need to change any of the
HA_* variables inside ganesha-ha.sh?
HA_NUM_SERVERS=0
HA_SERVERS=""
HA_CONFDIR="/etc/ganesha"
HA_VOL_NAME="gluster_shared_storage"
HA_VOL_MNT="/run/gluster/shared_storage"
No. You need not change any of these variables.
E [MSGID: 106123] [glusterd-syncop.c:1407:gd_commit_op_phase]
0-management: Commit of operation 'Volume (null)' failed on
localhost :
Failed to set up HA config for NFS-Ganesha. Please check the log
file
for details
Question 2) Do I really have to start corosync before enabling
nfs-ganesha?
No. The setup automatically starts pacemaker and corosync services.
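Once the setup completes, you can check the cluster state from any
node with
# pcs status
which should list all three nodes and the resources created for the
nfs-ganesha HA cluster.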
Thanks,
Soumya
Any help will be appreciated!!!
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users