Re: After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

A small correction to the file I provided earlier: pmap_set() returns 0 in case of failure.

On 09/16/2015 12:08 AM, Soumya Koduri wrote:
<code-snippet>
        /* pmap_set() returns 0 for FAIL and 1 for SUCCESS */
        if (!(pmap_set (newprog->prognum, newprog->progver, IPPROTO_TCP,
                        port))) {
                gf_log (GF_RPCSVC, GF_LOG_ERROR, "Could not register with"
                        " portmap %d %d %u", newprog->prognum,
                        newprog->progver, port);
                goto out;
        }
</code-snippet>

The error you got shows that the portmap registration of the mountd service
failed. You could start rpcbind in debug mode so that it prints error
messages on the console:
1) Create/edit '/etc/sysconfig/rpcbind' with the following contents:
#
# Optional arguments passed to rpcbind. See rpcbind(8)
RPCBIND_ARGS="-d"
2) Restart the rpcbind service. Instead of starting in daemon mode,
rpcbind will now print its syslog messages and wait on the console.
3) On another console, either restart glusterd or run the small C program
I have written to register the mountd service with portmap (attached). You
could run it and look for syslog messages like the ones below, printed by rpcbind.

 >> PMAP_SET request for (100005, 3) : Checking caller's adress (port = 832)
succeeded
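
For completeness, the same thing can be checked the other way around with
pmap_getport(). The sketch below is only an illustration (it is not gluster
code); it assumes rpcbind is reachable on 127.0.0.1 and uses program number
100005, version 3 from the log line above to ask the local portmapper whether
mountd is registered and, if so, on which port. "rpcinfo -p" gives the same
answer from the shell.

<code-snippet>
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>     /* struct sockaddr_in, htons(), IPPROTO_TCP */
#include <arpa/inet.h>      /* inet_addr() */
#include <rpc/pmap_clnt.h>  /* pmap_getport() */

int main(void)
{
	struct sockaddr_in addr;
	unsigned short port;

	/* Point at the portmapper (rpcbind) on the local host, port 111. */
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(111);
	addr.sin_addr.s_addr = inet_addr("127.0.0.1");

	/* MOUNT program is 100005; gluster registers version 3 over TCP.
	 * pmap_getport() returns 0 if there is no mapping (or rpcbind
	 * could not be contacted). */
	port = pmap_getport(&addr, 100005, 3, IPPROTO_TCP);
	if (port == 0)
		printf("mountd (100005, 3) is not registered with portmap\n");
	else
		printf("mountd (100005, 3) is registered on port %u\n",
		       (unsigned) port);

	return 0;
}
</code-snippet>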

That's all I could think of. CC'ing Niels; he may be able to provide more
information on how to debug this issue.

Thanks,
Soumya


On 09/15/2015 05:27 PM, Yaroslav Molochko wrote:
I have two identical hosts managed by configuration management; it was
working with 3.5 and stopped working with 3.7 on ONE host. Okay, I've
done what you requested, and here is the result:
======================================
root@PSC01SERV008:~# systemctl restart rpcbind
root@PSC01SERV008:~# /etc/init.d/glusterfs-server restart
Restarting glusterfs-server (via systemctl): glusterfs-server.service.
root@PSC01SERV008:~# iptables -nvL
Another app is currently holding the xtables lock. Perhaps you want to
use the -w option?
root@PSC01SERV008:~# iptables -nvL
Chain INPUT (policy ACCEPT 3223K packets, 1760M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 1478K packets, 1926M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 4697K packets, 1354M bytes)
 pkts bytes target     prot opt in     out     source               destination
root@PSC01SERV008:~# cat /etc/hosts.allow
# /etc/hosts.allow: list of hosts that are allowed to access the system.
#                   See the manual pages hosts_access(5) and hosts_options(5).
#
# Example:    ALL: LOCAL @some_netgroup
#             ALL: .foobar.edu EXCEPT terminalserver.foobar.edu
#
# If you're going to protect the portmapper use the name "rpcbind" for the
# daemon name. See rpcbind(8) and rpc.mountd(8) for further information.
#
ALL: 127.0.0.1 : ALLOW
root@PSC01SERV008:~# gluster volume status
Status of volume: discover-music-prod-music-app-logs
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.116.254.17:/srv/data/glusterfs/dis
cover-music-prod/music-app-logs             49152     0          Y       17125
Brick 10.116.254.18:/srv/data/glusterfs/dis
cover-music-prod/music-app-logs             49152     0          Y       24663
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       17693
NFS Server on 10.116.254.17                 2049      0          Y       17146
Self-heal Daemon on 10.116.254.17           N/A       N/A        Y       17151

Task Status of Volume discover-music-prod-music-app-logs
------------------------------------------------------------------------------
There are no active volume tasks

================================

For the record, I've reinstalled and restarted everything I could, and
checked everything I could find on Google, and it still doesn't work.
Please, let's move on to something more sophisticated than restarting
glusterfs... I would not have contacted you if I had not already tried
restarting it a dozen times.

Do you have any debugging to see what is really happening?


2015-09-15 1:55 GMT+08:00 Soumya Koduri <skoduri@xxxxxxxxxx>:

    Could you try
    * disabling iptables (& firewalld, if enabled)
    * restarting the rpcbind service
    * restarting glusterd

    If this doesn't work, try the workaround mentioned in one of the forums:
    add the line below to the '/etc/hosts.allow' file.

             ALL: 127.0.0.1 : ALLOW

    Then restart the rpcbind and glusterd services.

    Thanks,
    Soumya


    On 09/14/2015 10:39 PM, Yaroslav Molochko wrote:

        Could not register with portmap




#include <stdio.h>
#include <errno.h>
#include <netinet/in.h>     /* IPPROTO_TCP */
#include <rpc/pmap_clnt.h>  /* pmap_set() */

int main(void)
{
	int ret = -1;

	/* Register MOUNT (program 100005, version 3) over TCP on port 38465
	 * with the local portmapper, the same way the gluster NFS server does. */
	ret = pmap_set (100005, 3, IPPROTO_TCP, 38465);

	/* pmap_set() returns 0 for FAIL and 1 for SUCCESS */
	if (!ret) {
		printf("pmap_set failed with errno(%d)\n", errno);
	} else {
		printf("pmap_set is successful\n");
	}

	return 0;
}
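
To try the attached program: save it as, say, pmap_set_test.c (the name is
just an example); it should build with a plain "gcc -o pmap_set_test
pmap_set_test.c". On newer distributions where glibc no longer ships the
SunRPC headers, you may need libtirpc instead (roughly -I/usr/include/tirpc
and -ltirpc). If the registration succeeds, "rpcinfo -p" should list
program 100005 afterwards.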
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
