----- Original Message -----
> From: "Jeffrey Brewster" <jab2805@xxxxxxxxx>
> To: "Ben Turner" <bturner@xxxxxxxxxx>
> Cc: gluster-users@xxxxxxxxxxx
> Sent: Tuesday, January 14, 2014 4:35:30 PM
> Subject: Re: Unable to mount gfs gv0 volume Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
>
> Hi Ben,
>
> I don't have any "E" (error, I assume) lines in the mnt.log file. I checked all
> the log files in the /var/log/glusterfs/ dir. I restarted glusterd to see
> if I could see any errors.

Make sure SELinux is disabled and your firewall is open to allow gluster traffic. Have a look at:

http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting

for the ports you need open. As a test I would just try disabling iptables and adding the rules back in after you confirm it is working.
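Roughly, that boils down to something like the following on EL5, run on both servers. This is only meant as a quick test; re-enable SELinux and put proper iptables rules back once the mount works:

# getenforce
# setenforce 0          (permissive for now; set SELINUX=disabled in /etc/selinux/config and reboot to turn it off completely)
# service iptables stop
# iptables -L -n        (confirm no rules are left loaded)

Then retry the mount.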
-b

> Data:
>
> Warnings from mount log:
> -------------
> # grep W mnt.log | cat -n
>
> 1 [2014-01-14 19:32:22.920069] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
> 2 [2014-01-14 19:32:22.920108] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
> 3 [2014-01-14 19:32:22.935611] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
> 4 [2014-01-14 19:32:22.935646] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
> 5 [2014-01-14 19:32:22.938783] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
> 6 [2014-01-14 19:32:22.938826] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
> 7 [2014-01-14 19:32:22.941076] W [socket.c:514:__socket_rwv] 0-gv0-client-1: readv failed (No data available)
> 8 [2014-01-14 19:32:22.945278] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
> 9 [2014-01-14 19:32:22.945312] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
> 10 [2014-01-14 19:32:22.946921] W [socket.c:514:__socket_rwv] 0-gv0-client-0: readv failed (No data available)
> 11 [2014-01-14 19:32:22.953383] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
> 12 [2014-01-14 19:32:22.953423] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
> 13 [2014-01-14 19:32:22.976633] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x31f6ad40cd] (-->/lib64/libpthread.so.0 [0x
>
> After restarting glusterd:
> -----------------------------
> # grep E * | grep 21:25 | cat -n
>
> 1 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:47.637082] E [rpc-transport.c:253:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.4.2/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
> 2 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:49.940650] E [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
> 3 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:49.940698] E [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
> 4 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:52.075563] E [glusterd-utils.c:3801:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/3096dde11d292c28c8c2f97101c272e8.socket error: Resource temporarily unavailable
> 5 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:53.084722] E [glusterd-utils.c:3801:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/15f2dcd004edbff6ab31364853d6b6b0.socket error: No such file or directory
> 6 glustershd.log:[2014-01-14 21:25:42.392401] W [socket.c:1962:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (No data available), peer (127.0.0.1:24007)
> 7 glustershd.log:[2014-01-14 21:25:53.476026] E [afr-self-heald.c:1067:afr_find_child_position] 0-gv0-replicate-0: getxattr failed on gv0-client-0 - (Transport endpoint is not connected)
> 8 nfs.log:[2014-01-14 21:25:42.391560] W [socket.c:1962:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (No data available), peer (127.0.0.1:24007)
>
> Procs after restart:
>
> ps -ef | grep gluster
> root 6345 1 0 18:35 ? 00:00:00 /usr/sbin/glusterfsd -s gcvs4056 --volfile-id gv0.gcvs4056.data-gv0-brick1-app -p /var/lib/glusterd/vols/gv0/run/gcvs4056-data-gv0-brick1-app.pid -S /var/run/f2339d9fa145fd28662d8b970fbd4aab.socket --brick-name /data/gv0/brick1/app -l /var/log/glusterfs/bricks/data-gv0-brick1-app.log --xlator-option *-posix.glusterd-uuid=b1aae40a-78be-4303-bf48-49fb41d6bb30 --brick-port 49153 --xlator-option gv0-server.listen-port=49153
> root 7240 1 0 21:25 ? 00:00:00 /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
> root 7266 1 0 21:25 ? 00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/3096dde11d292c28c8c2f97101c272e8.socket
> root 7273 1 0 21:25 ? 00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/15f2dcd004edbff6ab31364853d6b6b0.socket --xlator-option *replicate*.node-uuid=b1aae40a-78be-4303-bf48-49fb41d6bb30
> root 7331 5375 0 21:34 pts/1 00:00:00 grep gluster
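Thanks for the data. The "readv failed" and "Transport endpoint is not connected" messages above usually mean a client could not reach glusterd or a brick port, which fits the firewall theory, and the rdma.so and "Unknown key: brick-0" errors are generally harmless on a tcp-only volume. A rough sanity check before retrying the mount would be something like this (gcvs0139 is just the other peer from your volume info; substitute whatever ports "volume status" actually reports):

# gluster peer status
# gluster volume status gv0      (both bricks should show Y under Online, and the Port column gives each brick's TCP port)

and from the box you are mounting on:

# lsmod | grep fuse              (if nothing comes back, run "modprobe fuse")
# telnet gcvs0139 24007          (management port; nc works too if telnet isn't installed)
# telnet gcvs0139 <brick port from volume status>

If those connect, retry the mount and have another look at the tail of the mount log under /var/log/glusterfs/.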
> On Tuesday, January 14, 2014 4:08 PM, Ben Turner <bturner@xxxxxxxxxx> wrote:
>
> ----- Original Message -----
> > From: "Jeffrey Brewster" <jab2805@xxxxxxxxx>
> > To: "Ben Turner" <bturner@xxxxxxxxxx>
> > Cc: gluster-users@xxxxxxxxxxx
> > Sent: Tuesday, January 14, 2014 3:57:24 PM
> > Subject: Re: Unable to mount gfs gv0 volume Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
> >
> > Thanks Ben,
> >
> > I tried that; it still failed.
>
> As Vijay suggested have a look at /var/log/glusterfs, there should be a log
> there with the mountpoint name that should give us a clue as to what is
> going on. Note that if there is a problem with FUSE not being loaded you will
> see something like:
>
> [2013-01-12 01:58:22.213417] I [glusterfsd.c:1759:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0.5rhs
> [2013-01-12 01:58:22.213831] E [mount.c:596:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
> [2013-01-12 01:58:22.213856] E [xlator.c:385:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
>
> If you can't tell the problem from the log, shoot out the relevant lines and
> I'll have a look.
>
> -b
>
> > On Tuesday, January 14, 2014 3:22 PM, Ben Turner <bturner@xxxxxxxxxx> wrote:
> >
> > ----- Original Message -----
> > > From: "Jeffrey Brewster" <jab2805@xxxxxxxxx>
> > > To: gluster-users@xxxxxxxxxxx
> > > Sent: Tuesday, January 14, 2014 1:47:55 PM
> > > Subject: Unable to mount gfs gv0 volume Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
> > >
> > > Hi all,
> > >
> > > I have been following the quick start guide as part of a POC. I created a
> > > 10GB brick to be mounted. I'm unable to mount the volume. I don't see
> > > anything in the logs. Has anyone had the same issues? I was thinking I need
> > > to install gluster-client but I don't see it in the latest release RPMs.
> > >
> > > Data:
> > > ===========
> > >
> > > OS Version:
> > > ------------
> > > Description: Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
> > >
> > > Installed packages on both servers:
> > > ------------
> > > # rpm -qa | grep gluster | cat -n
> > > 1 glusterfs-libs-3.4.2-1.el5
> > > 2 glusterfs-3.4.2-1.el5
> > > 3 glusterfs-cli-3.4.2-1.el5
> > > 4 glusterfs-geo-replication-3.4.2-1.el5
> > > 5 glusterfs-fuse-3.4.2-1.el5
> > > 6 glusterfs-server-3.4.2-1.el5
> > >
> > > gluster peer probe successful:
> > > -----------
> > > peer probe: success: host gcvs0139 port 24007 already in peer list
> > >
> > > Gluster info:
> > > ---------
> > > gluster volume info | cat -n
> > > 1
> > > 2 Volume Name: gv0
> > > 3 Type: Replicate
> > > 4 Volume ID: 30a27041-ba1b-456f-b0bc-d8cdd2376c2f
> > > 5 Status: Started
> > > 6 Number of Bricks: 1 x 2 = 2
> > > 7 Transport-type: tcp
> > > 8 Bricks:
> > > 9 Brick1: gcvs0139:/data/gv0/brick1/app
> > > 10 Brick2: gcvs4056:/data/gv0/brick1/app
> > >
> > > Mount Failure:
> > > ----------
> > > [root@gcvs4056 jbrewster]# mount -t glusterfs gcvs4056:/gv0 /mnt
> > > Mount failed. Please check the log file for more details.
> >
> > I bet you need to modprobe the fuse module; in EL5 it's not loaded by default.
> >
> > -b
> >
> > > _______________________________________________
> > > Gluster-users mailing list
> > > Gluster-users@xxxxxxxxxxx
> > > http://supercolony.gluster.org/mailman/listinfo/gluster-users

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users