Unable to access gluster server

I am able to start glusterd on the server without a problem:

[2012-01-24 14:43:22.156731] I [glusterfsd.c:1493:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.2.5
[2012-01-24 14:43:22.171179] W [posix.c:4733:init] 0-brick: Posix
access control list is not supported.
[2012-01-24 14:43:22.171293] I [glusterd.c:550:init] 0-management:
Using /etc/glusterd as working directory
[2012-01-24 14:43:22.215941] C [rdma.c:3934:rdma_init]
0-rpc-transport/rdma: Failed to get IB devices
[2012-01-24 14:43:22.216052] E [rdma.c:4813:init] 0-rdma.management:
Failed to initialize IB Device
[2012-01-24 14:43:22.216069] E
[rpc-transport.c:742:rpc_transport_load] 0-rpc-transport: 'rdma'
initialization failed
[2012-01-24 14:43:22.216084] W [rpcsvc.c:1288:rpcsvc_transport_create]
0-rpc-service: cannot create listener, initing the transport failed
[2012-01-24 14:43:22.241155] I [glusterd.c:88:glusterd_uuid_init]
0-glusterd: retrieved UUID: 1d1cefb0-8917-4d15-9364-7f3999837762
Given volfile:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option working-directory /etc/glusterd
  4:     option transport-type socket,rdma
  5:     option transport.socket.keepalive-time 10
  6:     option transport.socket.keepalive-interval 2
  7: end-volume
  8:
  9: volume brick
 10:   type storage/posix                            # POSIX FS translator
 11:   option directory /home/user1/data/export      # Export this directory
 12: end-volume
 13:
 14: ### Add network serving capability to above brick.
 15: volume server
 16:   type protocol/server
 17:   option transport-type tcp
 18:   option transport.socket.bind-address 192.168.1.103    # Default is to listen on all interfaces
 19:   option transport.socket.listen-port 6996              # Default is 6996
 20:
 21: # option client-volume-filename /etc/glusterfs/glusterfs-client.vol
 22:   subvolumes brick
 23: # NOTE: Access to any volume through protocol/server is denied by
 24: # default. You need to explicitly grant access through "auth"
 25: # option.
 26:   option auth.addr.brick.allow * # Allow access to "brick" volume
 27: end-volume

+------------------------------------------------------------------------------+
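
In case it helps with diagnosis, these are the kinds of checks I can run to confirm that something is actually listening on the address/port configured in the server volfile above (assuming netstat and telnet are available; the IP and port are the ones from the volfile):

    # on the server: is anything listening on the brick port?
    sudo netstat -tlnp | grep 6996

    # from the client: can I reach that port at all?
    telnet 192.168.1.103 6996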


Running the gluster client with `sudo glusterfs -l logs/glustfs.log -f ./glusterfs-client.vol /tmp/server1` also appears to work and the FUSE mount comes up, although the log does show an XDR decoding error:


[2012-01-24 14:49:11.365068] I [glusterfsd.c:1493:main] 0-glusterfs:
Started running glusterfs version 3.2.5
[2012-01-24 14:49:11.413117] W [client.c:2276:init] 0-client: Volume
is dangling.
[2012-01-24 14:49:11.414060] W
[rpc-transport.c:447:validate_volume_options] 0-client: option
'transport.socket.remote-port' is deprecated, preferred is
'remote-port', continuing with correction
[2012-01-24 14:49:11.414095] W [graph.c:120:_log_if_option_is_invalid]
0-client: option 'remote-port' is not recognized
[2012-01-24 14:49:11.414108] I [client.c:1935:notify] 0-client: parent
translators are ready, attempting connect on transport
Given volfile:
+------------------------------------------------------------------------------+
  1: volume client
  2:   type protocol/client
  3:   option transport-type tcp     # for TCP/IP transport
  4: # option transport-type ib-sdp  # for Infiniband transport
  5:   option remote-host 192.168.1.103      # IP address of the remote brick
  6:   option transport.socket.remote-port 6996              # default server port is 6996
  7:
  8: # option transport-type ib-verbs # for Infiniband verbs transport
  9: # option transport.ib-verbs.work-request-send-size  1048576
 10: # option transport.ib-verbs.work-request-send-count 16
 11: # option transport.ib-verbs.work-request-recv-size  1048576
 12: # option transport.ib-verbs.work-request-recv-count 16
 13: # option transport.ib-verbs.remote-port 6996              # default server port is 6996
 14:
 15:   option remote-subvolume brick        # name of the remote volume
 16: # option transport-timeout 30          # default value is 120seconds
 17: end-volume

+------------------------------------------------------------------------------+
[2012-01-24 14:49:11.415468] I
[client-handshake.c:1090:select_server_supported_programs] 0-client:
Using Program GlusterFS 3.2.5, Num (1298437), Version (310)
[2012-01-24 14:49:11.416214] W [rpc-common.c:64:xdr_to_generic]
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xce) [0xb776e13e]
(-->/usr/lib/glusterfs/3.2.5/xlator/protocol/client.so(client_setvolume_cbk+0x8a)
[0xb61a6cea] (-->/usr/lib/libgfxdr.so.0(xdr_to_setvolume_rsp+0x35)
[0xb7750945]))) 0-xdr: XDR decoding failed
[2012-01-24 14:49:11.416229] E
[client-handshake.c:827:client_setvolume_cbk] 0-client: XDR decoding
failed
[2012-01-24 14:49:11.416237] I
[client-handshake.c:933:client_setvolume_cbk] 0-client: sending
CHILD_CONNECTING event
[2012-01-24 14:49:11.417773] I [fuse-bridge.c:3339:fuse_graph_setup]
0-fuse: switched to graph 0
[2012-01-24 14:49:11.417864] I [fuse-bridge.c:2927:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13
kernel 7.17
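
According to the log the FUSE mount itself is initialized, and these are the commands I can use on the client to double-check it (the mount point is the one passed to glusterfs above; df may of course hang too if the brick connection is broken):

    # confirm the FUSE mount is actually present
    mount | grep /tmp/server1

    # basic sanity check of the mounted volume
    df -h /tmp/server1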


However, when I test copying a file from the client to the server with `cp test /tmp/server1`, the command seems to hang forever.
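
In case it is useful, these are the commands I can run to reproduce the hang and collect more detail (strace is assumed to be installed; the log path is the one passed with -l above):

    # see which system call the copy gets stuck in
    strace -f cp test /tmp/server1

    # watch the client log while the copy is hanging
    tail -f logs/glustfs.log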

Where can I check to see what is going wrong?

Thanks.

