Something wrong with glusterfs


 



Hello again,
Something is wrong with my GlusterFS installation.

OS: Debian
cat /etc/debian_version
7.6
Package: glusterfs-server
Version: 3.2.7-3+deb7u1

Description: I have 3 servers with bricks (192.168.1.1 - node1,
192.168.1.2 - node2, 192.168.1.3 - node3).
The volume was created with:
gluster volume create opennebula transport tcp node1:/data node2:/data node3:/data
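
Note: the volume info below reports "Type: Replicate", so the create command
presumably included a replica count; the usual syntax for a 3-way replica
would be (the "replica 3" here is assumed, since the pasted command omits it):

# "replica 3" is an assumption; adjust to the actual create command used
gluster volume create opennebula replica 3 transport tcp node1:/data node2:/data node3:/data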

192.168.1.4 - client

# volume info
gluster volume info

Volume Name: opennebula
Type: Replicate
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/data
Brick2: node2:/data
Brick3: node3:/data
Options Reconfigured:
server.allow-insecure: on
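
For reference, that option is normally enabled with the standard volume-set
command (assuming it was set the usual way):

# assumed: how server.allow-insecure ended up under "Options Reconfigured"
gluster volume set opennebula server.allow-insecure on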


# peer info
gluster peer show
unrecognized word: show (position 1)
root@node1:/data# gluster peer status
Number of Peers: 2

Hostname: node3
Uuid: 355f676d-044c-453d-8e82-13b810c089bb
State: Peer in Cluster (Connected)

Hostname: node2
Uuid: bfed0b59-6b2f-474e-a3d7-18b0eb0b1c77
State: Peer in Cluster (Connected)


# On the client I mounted the volume with:
mount.glusterfs node1:/opennebula /var/lib/one/
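
If a remount for debugging is needed, the FUSE client's verbosity can be
raised with the standard log-level mount option:

# log-level=DEBUG is only an example; INFO is typically the default
mount -t glusterfs -o log-level=DEBUG node1:/opennebula /var/lib/one/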

ls -al /var/lib/one shows the files at first, but after about a minute
ls -al /var/lib/one hangs.
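
A basic check from the client is whether the gluster ports are reachable at
all; glusterd listens on 24007 and the brick here on 24009 (see the process
lists below):

# plain TCP reachability test from the client (192.168.1.4)
telnet node1 24007
telnet node1 24009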


Client log:

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[2014-12-05 13:28:53.290981] I [fuse-bridge.c:3461:fuse_graph_setup] 0-fuse: switched to graph 0
[2014-12-05 13:28:53.291223] I [fuse-bridge.c:3049:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.17
[2014-12-05 13:28:53.291800] I [afr-common.c:1522:afr_set_root_inode_on_first_lookup] 0-opennebula-replicate-0: added root inode
[2014-12-05 13:29:16.355469] C [client-handshake.c:121:rpc_client_ping_timer_expired] 0-opennebula-client-0: server 192.168.1.1:24009 has not responded in the last 42 seconds, disconnecting.
[2014-12-05 13:29:16.355684] E [rpc-clnt.c:341:saved_frames_unwind] (-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xb0) [0x7f7020ccec60] (-->/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x7f7020cce8fe] (-->/usr/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f7020cce85e]))) 0-opennebula-client-0: forced unwinding frame type(GlusterFS 3.1) op(READDIRP(40)) called at 2014-12-05 13:27:10.345569
[2014-12-05 13:29:16.355754] E [client3_1-fops.c:1937:client3_1_readdirp_cbk] 0-opennebula-client-0: remote operation failed: Transport endpoint is not connected
[2014-12-05 13:29:16.355772] I [afr-self-heal-entry.c:1846:afr_sh_entry_impunge_readdir_cbk] 0-opennebula-replicate-0: readdir of / on subvolume opennebula-client-0 failed (Transport endpoint is not connected)
[2014-12-05 13:29:16.356073] I [socket.c:2275:socket_submit_request] 0-opennebula-client-0: not connected (priv->connected = 0)
[2014-12-05 13:29:16.356091] W [rpc-clnt.c:1417:rpc_clnt_submit] 0-opennebula-client-0: failed to submit rpc-request (XID: 0x112x Program: GlusterFS 3.1, ProgVers: 310, Proc: 33) to rpc-transport (opennebula-client-0)
[2014-12-05 13:29:16.356107] I [afr-self-heal-entry.c:129:afr_sh_entry_erase_pending_cbk] 0-opennebula-replicate-0: /: failed to erase pending xattrs on opennebula-client-0 (Transport endpoint is not connected)
[2014-12-05 13:29:16.356209] E [rpc-clnt.c:341:saved_frames_unwind] (-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xb0) [0x7f7020ccec60] (-->/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x7f7020cce8fe] (-->/usr/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f7020cce85e]))) 0-opennebula-client-0: forced unwinding frame type(GlusterFS Handshake) op(PING(3)) called at 2014-12-05 13:27:52.348889
[2014-12-05 13:29:16.356227] W [client-handshake.c:264:client_ping_cbk] 0-opennebula-client-0: timer must have expired
[2014-12-05 13:29:16.356257] E [rpc-clnt.c:341:saved_frames_unwind] (-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xb0) [0x7f7020ccec60] (-->/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x7f7020cce8fe] (-->/usr/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f7020cce85e]))) 0-opennebula-client-0: forced unwinding frame type(GlusterFS 3.1) op(STATFS(14)) called at 2014-12-05 13:28:20.214777
[2014-12-05 13:29:16.356274] I [client3_1-fops.c:637:client3_1_statfs_cbk] 0-opennebula-client-0: remote operation failed: Transport endpoint is not connected
[2014-12-05 13:29:16.356304] I [client.c:1883:client_rpc_notify] 0-opennebula-client-0: disconnected
[2014-12-05 13:29:16.356663] I [client-handshake.c:1090:select_server_supported_programs] 0-opennebula-client-0: Using Program GlusterFS 3.2.7, Num (1298437), Version (310)
[2014-12-05 13:29:16.356966] W [rpc-common.c:64:xdr_to_generic] (-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0x85) [0x7f7020ccec35] (-->/usr/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5) [0x7f7020cce295] (-->/usr/lib/glusterfs/3.2.7/xlator/protocol/client.so(client3_1_entrylk_cbk+0x52) [0x7f701da44122]))) 0-xdr: XDR decoding failed
[2014-12-05 13:29:16.356993] E [client3_1-fops.c:1292:client3_1_entrylk_cbk] 0-opennebula-client-0: error
[2014-12-05 13:29:16.357015] E [client3_1-fops.c:1303:client3_1_entrylk_cbk] 0-opennebula-client-0: remote operation failed: Invalid argument
[2014-12-05 13:29:16.357036] I [afr-self-heal-common.c:2193:afr_self_heal_completion_cbk] 0-opennebula-replicate-0: background  entry self-heal completed on /
[2014-12-05 13:29:16.357229] I [client-handshake.c:913:client_setvolume_cbk] 0-opennebula-client-0: Connected to 192.168.1.1:24009, attached to remote volume '/data'.
[2014-12-05 13:29:16.357246] I [client-handshake.c:779:client_post_handshake] 0-opennebula-client-0: 2 fds open - Delaying child_up until they are re-opened
[2014-12-05 13:29:16.357617] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-opennebula-client-0: reopendir on / succeeded (fd = 0)
[2014-12-05 13:29:16.357651] I [client-handshake.c:536:client3_1_reopendir_cbk] 0-opennebula-client-0: reopendir on / succeeded (fd = 1)
[2014-12-05 13:29:16.357666] I [client-lk.c:617:decrement_reopen_fd_count] 0-opennebula-client-0: last fd open'd/lock-self-heal'd - notifying CHILD-UP
[2014-12-05 13:29:16.357681] I [client3_1-fops.c:2355:client_fdctx_destroy] 0-opennebula-client-0: sending releasedir on fd
[2014-12-05 13:29:16.377961] I [afr-common.c:1039:afr_launch_self_heal] 0-opennebula-replicate-0: background  entry self-heal triggered. path: /
[2014-12-05 13:29:16.378081] I [afr-common.c:1039:afr_launch_self_heal] 0-opennebula-replicate-0: background  entry self-heal triggered. path: /
[2014-12-05 13:29:16.378333] E [afr-self-heal-entry.c:2189:afr_sh_post_nonblocking_entry_cbk] 0-opennebula-replicate-0: Non Blocking entrylks failed for /.
[2014-12-05 13:29:16.378359] E [afr-self-heal-common.c:2190:afr_self_heal_completion_cbk] 0-opennebula-replicate-0: background  entry self-heal failed on /
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
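
The "has not responded in the last 42 seconds" message above corresponds to
the network.ping-timeout volume option, whose default is 42 seconds. It can
be tuned per volume, although raising it would only hide whatever is stalling
the brick:

# 42 is the default; shown only to illustrate the knob
gluster volume set opennebula network.ping-timeout 42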

And now df -h hangs as well. :(
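
Since the initial handshake succeeds but the connection later times out, a
firewall or connection-tracking rule dropping idle brick traffic is one
common suspect; the rules on each node can be inspected with:

# list all filter-table rules with packet counters
iptables -L -n -v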

node1
root@node1:/data# ps aux | grep gluster
root      2391  0.0  0.3  67676 18428 ?        Ssl  13:21   0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
root      2959  0.0  0.2 218924 14900 ?        Ssl  13:21   0:00 /usr/sbin/glusterfsd --xlator-option opennebula-server.listen-port=24009 -s localhost --volfile-id opennebula.node1.data -p /etc/glusterd/vols/opennebula/run/node1-data.pid -S /tmp/41ee3506b47079b17ab7acbda6b5b459.socket --brick-name /data --brick-port 24009 -l /var/log/glusterfs/bricks/data.log
root      2963  0.0  0.6 168580 41336 ?        Ssl  13:21   0:00 /usr/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
root      3129  0.0  0.0   6304   596 pts/0    S+   13:38   0:00 grep gluster
root@node1:/data#


node2
root@node2:~#  ps aux | grep gluster
root      2335  0.0  0.2  67676 18424 ?        Ssl  13:20   0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
root      2961  0.0  0.1 149112 14656 ?        Ssl  13:20   0:00 /usr/sbin/glusterfsd --xlator-option opennebula-server.listen-port=24009 -s localhost --volfile-id opennebula.node2.data -p /etc/glusterd/vols/opennebula/run/node2-data.pid -S /tmp/191be92428c92005cd8acf75ec50fdb9.socket --brick-name /data --brick-port 24009 -l /var/log/glusterfs/bricks/data.log
root      2966  0.0  0.5 103044 41324 ?        Ssl  13:20   0:00 /usr/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
root      3190  0.0  0.0   7832   880 pts/0    S+   13:38   0:00 grep gluster
root@node2:~#

node3
root@node3:~#  ps aux | grep gluster
root      2394  0.0  0.2  67676 18428 ?        Ssl  11:50   0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
root      2964  0.0  0.1 149196 14648 ?        Ssl  11:51   0:00 /usr/sbin/glusterfsd --xlator-option opennebula-server.listen-port=24009 -s localhost --volfile-id opennebula.node3.data -p /etc/glusterd/vols/opennebula/run/node3-data.pid -S /tmp/14356b3499622409f8bfb31f38493f06.socket --brick-name /data --brick-port 24009 -l /var/log/glusterfs/bricks/data.log
root      2970  0.0  0.5 103044 41572 ?        Ssl  11:51   0:00 /usr/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log
root      3422  0.0  0.0   7832   880 pts/0    S+   13:38   0:00 grep gluster
root@node3:~#
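
The listening sockets can also be confirmed on each node:

# shows which gluster processes are bound to 24007/24009
netstat -tlnp | grep gluster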


Can you please help me resolve this?



-------------------------------------------------------
Senior System Administrator
Alexey Shalin
Hoster kg LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
help@xxxxxxxxx





