Hello, I've been struggling to figure out a few issues I've been having with my 3 node glusterfs setup. We have been experiencing a problem where one of the gluster nodes decides it can't communicate with another node and appears to restart glusterd, which then causes glusterd to restart on the node it can't communicate with. In the end the 3rd node loses all communication with the other 2 nodes and restarts glusterd as well because quorum is lost. The result is that an ovirt VM that is critical to our business stops responding while all of this is taking place. I've checked with our network team and they can't find any issues on the 10gbe switch these systems are all connected to for glusterfs communication, so I'm at a loss as to what's causing this. Our setup is as follows.
san1 10.4.16.11 (10Gbe IP)
san2 10.4.16.12 (10Gbe IP)
san3 10.4.16.19 (10Gbe IP)
Each of these also has a 1Gb public facing interface with access to the web.
hv1-hv7: all communicating with the glusterfs sans over the same 10Gbe switch.
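In case it helps, these are the kinds of basic checks I can run from each san against its peers over the 10Gbe network; the interface name (em2) and peer IP below are just placeholders for whichever pair is being tested:

# ping -I em2 10.4.16.12                  (em2 = 10Gbe interface, placeholder name)
# nc -zv 10.4.16.12 24007                 (glusterd management port)
# nc -zv 10.4.16.12 49152                 (brick port, as reported by gluster volume status)
# ethtool -S em2 | grep -iE 'err|drop'    (interface error/drop counters)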
The first log entry around the time of the glusterd restart is on san2:
[2018-07-11 19:16:09.130303] W [socket.c:593:__socket_rwv] 0-management: readv on 10.4.16.11:24007 failed (Connection timed out)
Followed by san1:
[2018-07-11 19:16:09.169704] I [MSGID: 106004] [glusterd-handler.c:6317:__glusterd_peer_rpc_notify] 0-management: Peer <10.4.16.12> (<dfe01058-5bea-4b67-8859-382a2c8854f4>), in state <Peer in Cluster>, has disconnected from glusterd.
Back on san2:
[2018-07-11 19:16:09.172360] I [MSGID: 106004] [glusterd-handler.c:6317:__glusterd_peer_rpc_notify] 0-management: Peer <10.4.16.11> (<0f3090ee-080b-4a6b-9964-0ca86d801469>), in state <Peer in Cluster>, has disconnected from glusterd.
And on san1:
[2018-07-11 19:16:09.194170] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f041e73a22a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f041e744198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f041e7fb765] ) 0-management: Lock for vol EXPORTB not held
It seems to spiral from there. At around 19:16:12.488534 san3 chimes in about its connection to san2 failing, and a few seconds later it decides san1 is also unreachable, which causes it to restart glusterd as well.
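To confirm whether glusterd itself is actually restarting on each node (rather than just dropping its peer connections), the service state and the journal can be checked on each san around the incident. This is just a sketch and assumes the sans are systemd-based:

# systemctl status glusterd
# journalctl -u glusterd --since "2018-07-11 19:10" --until "2018-07-11 19:30"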
Meanwhile, on one of the HVs (the one with the critical VM running on it) I see this log entry around the time this all starts, in rhev-data-center-mnt-glusterSD-10.4.16.11\:gv1.log:
[2018-07-11 19:16:14.918389] W [socket.c:593:__socket_rwv] 0-gv1-client-2: readv on 10.4.16.19:49153 failed (No data available)
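From the HV side, a couple of things that could be compared against that log entry (the brick port 49153 below is just the one from the line above, and it can change when bricks are restarted):

# gluster volume status gv1     (run on a san, to see the current brick ports and PIDs)
# nc -zv 10.4.16.19 49153       (run on the HV, against the brick port from the log above)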
I've attached the full logs from the 19:16 time frame.
I need some help figuring out what could be causing this issue and what to check next.
Additional information: Glusterfs v3.12.6-1 is running on all 3 sans. I had to stop patching them because doing so one at a time, rebooting, and allowing each node to completely heal before patching and rebooting the next one would cause a similar issue 100% of the time.
Glusterfs v3.12.11-1 is running on all 7 HVs in the ovirt cloud.
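For reference, when I say I let a node completely heal before patching the next one, the check is roughly the following, waiting until heal info reports no outstanding entries on any brick:

# gluster peer status
# gluster volume heal gv1 info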
# gluster volume info gv1
Volume Name: gv1
Type: Replicate
Volume ID: ea12f72d-a228-43ba-a360-4477cada292a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.4.16.19:/glusterfs/data1/gv1
Brick2: 10.4.16.11:/glusterfs/data1/gv1
Brick3: 10.4.16.12:/glusterfs/data1/gv1
Options Reconfigured:
network.ping-timeout: 50
nfs.register-with-portmap: on
nfs.export-volumes: on
nfs.addr-namelookup: off
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on
nfs.disable: off
nfs.rpc-auth-allow: 10.4.16.*
auth.allow: 10.4.16.*
cluster.self-heal-daemon: enable
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
cluster.server-quorum-ratio: 51%
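One note on the options above: the S30samba-set.sh / S32gluster_enable_shared_storage.sh hook-script entries later in the logs are from network.ping-timeout being bumped to 75 and then set back to 50 later that evening, i.e. a change along the lines of:

# gluster volume set gv1 network.ping-timeout 75
# gluster volume set gv1 network.ping-timeout 50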
Edward Clay
[2018-07-11 19:16:12.488634] W [socket.c:593:__socket_rwv] 0-management: readv on 10.4.16.12:24007 failed (Connection timed out) [2018-07-11 19:16:12.503577] I [MSGID: 106004] [glusterd-handler.c:6317:__glusterd_peer_rpc_notify] 0-management: Peer <10.4.16.12> (<dfe01058-5bea-4b67-8859-382a2c8854f4>), in state <Peer in Cluster>, has disconnected from glusterd. [2018-07-11 19:16:12.592253] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f60f376a22a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f60f3774198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f60f382b765] ) 0-management: Lock for vol EXPORTB not held [2018-07-11 19:16:12.602249] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for EXPORTB [2018-07-11 19:16:12.602448] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f60f376a22a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f60f3774198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f60f382b765] ) 0-management: Lock for vol gv1 not held [2018-07-11 19:16:12.602478] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for gv1 [2018-07-11 19:16:12.618832] W [socket.c:593:__socket_rwv] 0-management: readv on 10.4.16.11:24007 failed (Connection timed out) [2018-07-11 19:16:12.618887] I [MSGID: 106004] [glusterd-handler.c:6317:__glusterd_peer_rpc_notify] 0-management: Peer <10.4.16.11> (<0f3090ee-080b-4a6b-9964-0ca86d801469>), in state <Peer in Cluster>, has disconnected from glusterd. [2018-07-11 19:16:12.619010] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f60f376a22a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f60f3774198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f60f382b765] ) 0-management: Lock for vol EXPORTB not held [2018-07-11 19:16:12.619057] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f60f376a22a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f60f3774198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f60f382b765] ) 0-management: Lock for vol gv1 not held [2018-07-11 19:16:12.619152] C [MSGID: 106002] [glusterd-server-quorum.c:360:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume gv1. Stopping local bricks. 
[2018-07-11 19:16:12.641346] I [MSGID: 106542] [glusterd-utils.c:8099:glusterd_brick_signal] 0-glusterd: sending signal 15 to brick with pid 2115 [2018-07-11 19:16:13.651358] I [MSGID: 106144] [glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick /glusterfs/data1/gv1 on port 49153 [2018-07-11 19:16:13.661287] W [glusterd-handler.c:6064:__glusterd_brick_rpc_notify] 0-management: got disconnect from stale rpc on /glusterfs/data1/gv1 [2018-07-11 19:16:23.332759] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed [2018-07-11 19:16:23.664421] I [MSGID: 106163] [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30600 [2018-07-11 19:16:23.666255] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469 [2018-07-11 19:16:23.873295] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4, host: 10.4.16.12, port: 0 [2018-07-11 19:16:23.877403] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.4.16.11 (0), ret: 0, op_ret: 0 [2018-07-11 19:16:23.891916] C [MSGID: 106003] [glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume gv1. Starting local bricks. [2018-07-11 19:16:23.892149] I [glusterd-utils.c:5941:glusterd_brick_start] 0-management: starting a fresh brick process for brick /glusterfs/data1/gv1 [2018-07-11 19:16:23.896812] I [MSGID: 106144] [glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick /glusterfs/data1/gv1 on port 49153 [2018-07-11 19:16:23.902276] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2018-07-11 19:16:23.931403] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:23.932357] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:23.940052] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping nfs daemon running in pid: 25885 [2018-07-11 19:16:24.940457] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: nfs service is stopped [2018-07-11 19:16:24.965434] I [MSGID: 106540] [glusterd-utils.c:4939:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV3 successfully [2018-07-11 19:16:24.966193] I [MSGID: 106540] [glusterd-utils.c:4948:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV1 successfully [2018-07-11 19:16:24.966780] I [MSGID: 106540] [glusterd-utils.c:4957:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NFSV3 successfully [2018-07-11 19:16:24.967380] I [MSGID: 106540] [glusterd-utils.c:4966:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v4 successfully [2018-07-11 19:16:24.968005] I [MSGID: 106540] [glusterd-utils.c:4975:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v1 successfully [2018-07-11 19:16:24.968558] I [MSGID: 106540] [glusterd-utils.c:4984:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered ACL v3 successfully [2018-07-11 19:16:24.992325] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting nfs service [2018-07-11 19:16:24.998379] I [MSGID: 106568] 
[glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 25898 [2018-07-11 19:16:25.999008] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: glustershd service is stopped [2018-07-11 19:16:25.999203] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting glustershd service [2018-07-11 19:16:26.002669] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 19:16:26.002762] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 19:16:26.002942] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 19:16:26.002984] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 19:16:26.003142] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 19:16:26.003179] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 19:16:26.011122] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469 [2018-07-11 19:16:26.012277] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:26.013303] I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now [2018-07-11 19:16:26.014041] I [MSGID: 106005] [glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management: Brick 10.4.16.19:/glusterfs/data1/gv1 has disconnected from glusterd. 
[2018-07-11 19:16:26.014710] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:26.023253] I [MSGID: 106163] [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30600 [2018-07-11 19:16:26.025365] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:26.041670] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.4.16.12 (0), ret: 0, op_ret: 0 [2018-07-11 19:16:26.044042] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:26.044765] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:26.045026] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:26.193249] I [MSGID: 106143] [glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick /glusterfs/data1/gv1 on port 49152 [2018-07-11 19:16:26.941324] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed [2018-07-11 19:16:26.958973] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469, host: 10.4.16.11, port: 0 [2018-07-11 19:16:26.961398] I [glusterd-utils.c:5847:glusterd_brick_start] 0-management: discovered already-running brick /glusterfs/data1/gv1 [2018-07-11 19:16:26.961437] I [MSGID: 106143] [glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick /glusterfs/data1/gv1 on port 49152 [2018-07-11 19:16:26.961628] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469 [2018-07-11 19:16:26.962590] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:26.962901] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping nfs daemon running in pid: 5954 [2018-07-11 19:16:27.963233] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: nfs service is stopped [2018-07-11 19:16:27.964124] I [MSGID: 106540] [glusterd-utils.c:4939:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV3 successfully [2018-07-11 19:16:27.964691] I [MSGID: 106540] [glusterd-utils.c:4948:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV1 successfully [2018-07-11 19:16:27.965331] I [MSGID: 106540] [glusterd-utils.c:4957:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NFSV3 successfully [2018-07-11 19:16:27.965974] I [MSGID: 106540] [glusterd-utils.c:4966:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v4 successfully [2018-07-11 19:16:27.966587] I [MSGID: 106540] [glusterd-utils.c:4975:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v1 successfully [2018-07-11 19:16:27.967239] I [MSGID: 106540] [glusterd-utils.c:4984:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered ACL v3 successfully [2018-07-11 19:16:27.971364] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting nfs service [2018-07-11 
19:16:28.977134] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 5963 [2018-07-11 19:16:29.977492] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: glustershd service is stopped [2018-07-11 19:16:29.977596] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting glustershd service [2018-07-11 19:16:29.981043] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 19:16:29.981116] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 19:16:29.981275] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 19:16:29.981326] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 19:16:29.981481] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 19:16:29.981514] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 19:16:29.989084] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469 [2018-07-11 19:16:12.619036] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for EXPORTB [2018-07-11 19:16:12.619077] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for gv1 [2018-07-11 20:31:06.707428] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 20:31:06.707494] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 20:31:06.707798] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 20:31:06.707841] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 20:31:06.708122] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 20:31:06.708158] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 20:31:06.757167] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f60f3825f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f60f38259cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f60fed25e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:06.779545] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f60f3825f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f60f38259cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f60fed25e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:29.645512] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 20:31:29.645769] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already 
stopped [2018-07-11 20:32:00.557947] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-07-11 20:37:41.348320] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv1 [2018-07-11 20:37:41.350482] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed [2018-07-11 20:37:41.351249] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed [2018-07-11 20:45:09.131629] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 20:45:09.131673] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 20:45:09.131916] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 20:45:09.131949] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 20:45:09.132180] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 20:45:09.132210] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 20:45:09.150888] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f60f3825f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f60f38259cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f60fed25e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv1 -o network.ping-timeout=50 --gd-workdir=/var/lib/glusterd [2018-07-11 20:45:09.160539] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f60f3825f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f60f38259cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f60fed25e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv1 -o network.ping-timeout=50 --gd-workdir=/var/lib/glusterd [2018-07-11 20:45:28.971567] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2018-07-11 19:16:09.130303] W [socket.c:593:__socket_rwv] 0-management: readv on 10.4.16.11:24007 failed (Connection timed out) [2018-07-11 19:16:09.172360] I [MSGID: 106004] [glusterd-handler.c:6317:__glusterd_peer_rpc_notify] 0-management: Peer <10.4.16.11> (<0f3090ee-080b-4a6b-9964-0ca86d801469>), in state <Peer in Cluster>, has disconnected from glusterd. [2018-07-11 19:16:09.220456] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f484a96722a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f484a971198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f484aa28765] ) 0-management: Lock for vol EXPORTB not held [2018-07-11 19:16:09.220503] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for EXPORTB [2018-07-11 19:16:09.220564] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f484a96722a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f484a971198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f484aa28765] ) 0-management: Lock for vol gv1 not held [2018-07-11 19:16:09.220586] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for gv1 [2018-07-11 19:16:12.469550] I [MSGID: 106004] [glusterd-handler.c:6317:__glusterd_peer_rpc_notify] 0-management: Peer <10.4.16.19> (<238af98a-d2f1-491d-a1f1-64ace4eb6d3d>), in state <Peer in Cluster>, has disconnected from glusterd. [2018-07-11 19:16:12.469686] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f484a96722a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f484a971198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f484aa28765] ) 0-management: Lock for vol EXPORTB not held [2018-07-11 19:16:12.469868] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f484a96722a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f484a971198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f484aa28765] ) 0-management: Lock for vol gv1 not held [2018-07-11 19:16:12.469921] C [MSGID: 106002] [glusterd-server-quorum.c:360:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume gv1. Stopping local bricks. 
[2018-07-11 19:16:12.475479] I [MSGID: 106542] [glusterd-utils.c:8099:glusterd_brick_signal] 0-glusterd: sending signal 15 to brick with pid 9434 [2018-07-11 19:16:13.487144] I [MSGID: 106144] [glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick /glusterfs/data1/gv1 on port 49152 [2018-07-11 19:16:13.524249] W [glusterd-handler.c:6064:__glusterd_brick_rpc_notify] 0-management: got disconnect from stale rpc on /glusterfs/data1/gv1 [2018-07-11 19:16:19.663375] I [MSGID: 106163] [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30600 [2018-07-11 19:16:19.717922] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469 [2018-07-11 19:16:19.876067] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.4.16.11 (0), ret: 0, op_ret: 0 [2018-07-11 19:16:19.893659] C [MSGID: 106003] [glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume gv1. Starting local bricks. [2018-07-11 19:16:19.893834] I [glusterd-utils.c:5941:glusterd_brick_start] 0-management: starting a fresh brick process for brick /glusterfs/data1/gv1 [2018-07-11 19:16:19.909548] I [MSGID: 106144] [glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick /glusterfs/data1/gv1 on port 49152 [2018-07-11 19:16:19.922312] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2018-07-11 19:16:20.044653] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469 [2018-07-11 19:16:20.044715] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:20.051397] I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now [2018-07-11 19:16:20.052123] I [MSGID: 106005] [glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management: Brick 10.4.16.12:/glusterfs/data1/gv1 has disconnected from glusterd. 
[2018-07-11 19:16:20.063944] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping nfs daemon running in pid: 9362 [2018-07-11 19:16:21.064239] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: nfs service is stopped [2018-07-11 19:16:21.158747] I [MSGID: 106540] [glusterd-utils.c:4939:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV3 successfully [2018-07-11 19:16:21.159539] I [MSGID: 106540] [glusterd-utils.c:4948:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV1 successfully [2018-07-11 19:16:21.160119] I [MSGID: 106540] [glusterd-utils.c:4957:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NFSV3 successfully [2018-07-11 19:16:21.160708] I [MSGID: 106540] [glusterd-utils.c:4966:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v4 successfully [2018-07-11 19:16:21.161258] I [MSGID: 106540] [glusterd-utils.c:4975:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v1 successfully [2018-07-11 19:16:21.161837] I [MSGID: 106540] [glusterd-utils.c:4984:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered ACL v3 successfully [2018-07-11 19:16:21.170399] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting nfs service [2018-07-11 19:16:21.185272] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 9371 [2018-07-11 19:16:22.185617] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: glustershd service is stopped [2018-07-11 19:16:22.185736] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting glustershd service [2018-07-11 19:16:22.189156] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 19:16:22.189237] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 19:16:22.189502] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 19:16:22.189555] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 19:16:22.189748] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 19:16:22.189796] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 19:16:22.189909] I [glusterd-utils.c:5847:glusterd_brick_start] 0-management: discovered already-running brick /glusterfs/data1/gv1 [2018-07-11 19:16:22.189937] I [MSGID: 106143] [glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick /glusterfs/data1/gv1 on port 49154 [2018-07-11 19:16:22.199534] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469 [2018-07-11 19:16:22.272265] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469, host: 10.4.16.11, port: 0 [2018-07-11 19:16:22.273749] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469 [2018-07-11 19:16:22.273798] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:22.275756] I [MSGID: 106493] 
[glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0f3090ee-080b-4a6b-9964-0ca86d801469 [2018-07-11 19:16:22.387758] I [MSGID: 106143] [glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick /glusterfs/data1/gv1 on port 49154 [2018-07-11 19:16:23.338002] I [MSGID: 106163] [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30600 [2018-07-11 19:16:23.362587] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d [2018-07-11 19:16:23.878170] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.4.16.19 (0), ret: 0, op_ret: 0 [2018-07-11 19:16:23.882342] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping nfs daemon running in pid: 18910 [2018-07-11 19:16:24.882677] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: nfs service is stopped [2018-07-11 19:16:24.883602] I [MSGID: 106540] [glusterd-utils.c:4939:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV3 successfully [2018-07-11 19:16:24.884274] I [MSGID: 106540] [glusterd-utils.c:4948:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV1 successfully [2018-07-11 19:16:24.884940] I [MSGID: 106540] [glusterd-utils.c:4957:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NFSV3 successfully [2018-07-11 19:16:24.885549] I [MSGID: 106540] [glusterd-utils.c:4966:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v4 successfully [2018-07-11 19:16:24.886153] I [MSGID: 106540] [glusterd-utils.c:4975:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v1 successfully [2018-07-11 19:16:24.886811] I [MSGID: 106540] [glusterd-utils.c:4984:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered ACL v3 successfully [2018-07-11 19:16:24.891147] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting nfs service [2018-07-11 19:16:24.896653] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 18919 [2018-07-11 19:16:25.896987] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: glustershd service is stopped [2018-07-11 19:16:25.897100] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting glustershd service [2018-07-11 19:16:25.900554] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 19:16:25.900627] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 19:16:25.900790] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 19:16:25.900825] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 19:16:25.900977] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 19:16:25.901011] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 19:16:25.910447] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d [2018-07-11 19:16:25.910523] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 
0-management: Received my uuid as Friend [2018-07-11 19:16:25.911884] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d [2018-07-11 19:16:26.046753] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d, host: 10.4.16.19, port: 0 [2018-07-11 19:16:26.047959] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d [2018-07-11 19:16:26.048015] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:26.049879] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d [2018-07-11 19:16:12.469838] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for EXPORTB [2018-07-11 19:16:12.469899] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for gv1 [2018-07-11 20:30:31.917595] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-07-11 20:31:06.623066] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 20:31:06.623116] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 20:31:06.623413] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 20:31:06.623450] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 20:31:06.623720] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 20:31:06.623753] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 20:31:06.682776] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f484aa22f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f484aa229cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f4855f22e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:06.697771] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f484aa22f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f484aa229cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f4855f22e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:29.651791] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 20:31:29.651803] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 20:31:29.652083] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 20:31:29.667235] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f484aa22f0a] 
-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f484aa229cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f4855f22e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:29.677260] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f484aa22f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f484aa229cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f4855f22e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:09.845955] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-07-11 20:31:29.651547] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 20:31:29.651560] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 20:31:29.652116] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 20:45:09.088433] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 20:45:09.088473] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 20:45:09.088693] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 20:45:09.088723] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 20:45:09.088940] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 20:45:09.088968] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 20:45:09.103764] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f484aa22f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f484aa229cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f4855f22e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv1 -o network.ping-timeout=50 --gd-workdir=/var/lib/glusterd [2018-07-11 20:45:09.113923] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f484aa22f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f484aa229cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f4855f22e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv1 -o network.ping-timeout=50 --gd-workdir=/var/lib/glusterd [2018-07-11 20:45:14.751616] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-07-11 21:11:00.957966] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-07-11 21:14:52.630975] I [MSGID: 106499] [glusterd-handler.c:4303:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv1 [2018-07-11 21:24:48.684967] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-07-11 21:30:31.125726] I 
[MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2018-07-11 19:16:09.169704] I [MSGID: 106004] [glusterd-handler.c:6317:__glusterd_peer_rpc_notify] 0-management: Peer <10.4.16.12> (<dfe01058-5bea-4b67-8859-382a2c8854f4>), in state <Peer in Cluster>, has disconnected from glusterd. [2018-07-11 19:16:09.194170] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f041e73a22a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f041e744198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f041e7fb765] ) 0-management: Lock for vol EXPORTB not held [2018-07-11 19:16:09.194225] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for EXPORTB [2018-07-11 19:16:09.194285] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f041e73a22a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f041e744198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f041e7fb765] ) 0-management: Lock for vol gv1 not held [2018-07-11 19:16:09.194308] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for gv1 [2018-07-11 19:16:12.469414] I [MSGID: 106004] [glusterd-handler.c:6317:__glusterd_peer_rpc_notify] 0-management: Peer <10.4.16.19> (<238af98a-d2f1-491d-a1f1-64ace4eb6d3d>), in state <Peer in Cluster>, has disconnected from glusterd. [2018-07-11 19:16:12.469547] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f041e73a22a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f041e744198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f041e7fb765] ) 0-management: Lock for vol EXPORTB not held [2018-07-11 19:16:12.469662] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2322a) [0x7f041e73a22a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0x2d198) [0x7f041e744198] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xe4765) [0x7f041e7fb765] ) 0-management: Lock for vol gv1 not held [2018-07-11 19:16:12.469817] C [MSGID: 106002] [glusterd-server-quorum.c:360:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume gv1. Stopping local bricks. [2018-07-11 19:16:12.511162] I [MSGID: 106542] [glusterd-utils.c:8099:glusterd_brick_signal] 0-glusterd: sending signal 15 to brick with pid 29267 [2018-07-11 19:16:13.516061] I [MSGID: 106144] [glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick /glusterfs/data1/gv1 on port 49153 [2018-07-11 19:16:13.527126] W [glusterd-handler.c:6064:__glusterd_brick_rpc_notify] 0-management: got disconnect from stale rpc on /glusterfs/data1/gv1 [2018-07-11 19:16:19.869621] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4, host: 10.4.16.12, port: 0 [2018-07-11 19:16:19.872362] C [MSGID: 106003] [glusterd-server-quorum.c:354:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume gv1. Starting local bricks. 
[2018-07-11 19:16:19.878165] I [glusterd-utils.c:5941:glusterd_brick_start] 0-management: starting a fresh brick process for brick /glusterfs/data1/gv1 [2018-07-11 19:16:19.887807] I [MSGID: 106144] [glusterd-pmap.c:396:pmap_registry_remove] 0-pmap: removing brick /glusterfs/data1/gv1 on port 49153 [2018-07-11 19:16:19.895522] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2018-07-11 19:16:20.036160] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:20.036226] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:20.038639] I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now [2018-07-11 19:16:20.039312] I [MSGID: 106005] [glusterd-handler.c:6071:__glusterd_brick_rpc_notify] 0-management: Brick 10.4.16.11:/glusterfs/data1/gv1 has disconnected from glusterd. [2018-07-11 19:16:20.048473] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping nfs daemon running in pid: 25859 [2018-07-11 19:16:21.048804] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: nfs service is stopped [2018-07-11 19:16:21.058007] I [MSGID: 106540] [glusterd-utils.c:4939:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV3 successfully [2018-07-11 19:16:21.058876] I [MSGID: 106540] [glusterd-utils.c:4948:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV1 successfully [2018-07-11 19:16:21.059557] I [MSGID: 106540] [glusterd-utils.c:4957:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NFSV3 successfully [2018-07-11 19:16:21.060186] I [MSGID: 106540] [glusterd-utils.c:4966:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v4 successfully [2018-07-11 19:16:21.060870] I [MSGID: 106540] [glusterd-utils.c:4975:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v1 successfully [2018-07-11 19:16:21.061540] I [MSGID: 106540] [glusterd-utils.c:4984:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered ACL v3 successfully [2018-07-11 19:16:21.077863] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting nfs service [2018-07-11 19:16:21.089629] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 25868 [2018-07-11 19:16:22.089972] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: glustershd service is stopped [2018-07-11 19:16:22.090463] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting glustershd service [2018-07-11 19:16:22.100484] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 19:16:22.100557] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 19:16:22.101108] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 19:16:22.101160] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 19:16:22.102896] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 19:16:22.102952] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 19:16:22.103073] I 
[glusterd-utils.c:5847:glusterd_brick_start] 0-management: discovered already-running brick /glusterfs/data1/gv1 [2018-07-11 19:16:22.103102] I [MSGID: 106143] [glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick /glusterfs/data1/gv1 on port 49152 [2018-07-11 19:16:22.119656] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:22.199093] I [MSGID: 106163] [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30600 [2018-07-11 19:16:22.225998] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:22.265587] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.4.16.12 (0), ret: 0, op_ret: 0 [2018-07-11 19:16:22.267862] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:22.267909] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:22.269242] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: dfe01058-5bea-4b67-8859-382a2c8854f4 [2018-07-11 19:16:22.311490] I [MSGID: 106143] [glusterd-pmap.c:295:pmap_registry_bind] 0-pmap: adding brick /glusterfs/data1/gv1 on port 49152 [2018-07-11 19:16:23.875916] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d, host: 10.4.16.19, port: 0 [2018-07-11 19:16:23.879101] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping nfs daemon running in pid: 27647 [2018-07-11 19:16:24.879383] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: nfs service is stopped [2018-07-11 19:16:24.880312] I [MSGID: 106540] [glusterd-utils.c:4939:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV3 successfully [2018-07-11 19:16:24.880984] I [MSGID: 106540] [glusterd-utils.c:4948:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV1 successfully [2018-07-11 19:16:24.881688] I [MSGID: 106540] [glusterd-utils.c:4957:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NFSV3 successfully [2018-07-11 19:16:24.882416] I [MSGID: 106540] [glusterd-utils.c:4966:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v4 successfully [2018-07-11 19:16:24.883074] I [MSGID: 106540] [glusterd-utils.c:4975:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v1 successfully [2018-07-11 19:16:24.883774] I [MSGID: 106540] [glusterd-utils.c:4984:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered ACL v3 successfully [2018-07-11 19:16:24.911258] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting nfs service [2018-07-11 19:16:25.916805] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 27656 [2018-07-11 19:16:26.917114] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: glustershd service is stopped [2018-07-11 19:16:26.917319] I [MSGID: 106567] [glusterd-svc-mgmt.c:197:glusterd_svc_start] 0-management: Starting glustershd service [2018-07-11 19:16:26.920748] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 19:16:26.920826] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 19:16:26.920982] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 19:16:26.921019] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 19:16:26.921170] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 19:16:26.921250] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 19:16:26.939173] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d [2018-07-11 19:16:26.940117] I [MSGID: 106163] [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30600 [2018-07-11 19:16:26.942732] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d [2018-07-11 19:16:26.957159] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.4.16.19 (0), ret: 0, op_ret: 0 [2018-07-11 19:16:26.959952] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d [2018-07-11 19:16:26.961122] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2018-07-11 19:16:26.961468] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 238af98a-d2f1-491d-a1f1-64ace4eb6d3d [2018-07-11 19:16:12.469635] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for EXPORTB [2018-07-11 19:16:12.469735] W [MSGID: 106118] [glusterd-handler.c:6342:__glusterd_peer_rpc_notify] 0-management: Lock not released for gv1 [2018-07-11 19:29:40.482273] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-07-11 19:32:05.242773] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req The message "I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req" repeated 4 times between [2018-07-11 19:32:05.242773] and [2018-07-11 19:32:17.703948] [2018-07-11 20:06:17.222999] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-07-11 20:27:18.157431] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req The message "I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req" repeated 2 times between [2018-07-11 20:27:18.157431] and [2018-07-11 20:27:26.118064] [2018-07-11 20:31:06.739644] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 20:31:06.739695] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 20:31:06.739968] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: 
bitd already stopped [2018-07-11 20:31:06.739999] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 20:31:06.740280] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 20:31:06.740314] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 20:31:06.789081] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f041e7f5f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f041e7f59cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f0429cf5e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:06.799061] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f041e7f5f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f041e7f59cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f0429cf5e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:29.606419] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 20:31:29.606646] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 20:31:29.621124] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f041e7f5f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f041e7f59cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f0429cf5e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:29.630222] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f041e7f5f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f041e7f59cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f0429cf5e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv1 -o network.ping-timeout=75 --gd-workdir=/var/lib/glusterd [2018-07-11 20:31:29.606705] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 20:31:31.751610] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-07-11 20:31:29.606153] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 20:31:29.606166] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 20:31:29.606406] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 20:45:09.128671] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2018-07-11 20:45:09.128715] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped [2018-07-11 20:45:09.128950] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2018-07-11 20:45:09.128980] I [MSGID: 106568] 
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped [2018-07-11 20:45:09.129232] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2018-07-11 20:45:09.129265] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped [2018-07-11 20:45:09.145685] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f041e7f5f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f041e7f59cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f0429cf5e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv1 -o network.ping-timeout=50 --gd-workdir=/var/lib/glusterd [2018-07-11 20:45:09.155568] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xdef0a) [0x7f041e7f5f0a] -->/usr/lib64/glusterfs/3.12.6/xlator/mgmt/glusterd.so(+0xde9cd) [0x7f041e7f59cd] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f0429cf5e05] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv1 -o network.ping-timeout=50 --gd-workdir=/var/lib/glusterd [2018-07-11 20:45:21.089382] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
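For context, the hook-script entries above (S30samba-set.sh and S32gluster_enable_shared_storage.sh running with -o network.ping-timeout=...) line up with network.ping-timeout being set on gv1 to 75 around 20:31 and back to 50 around 20:45, i.e. a volume set along the lines of:

# gluster volume set gv1 network.ping-timeout 75
# gluster volume get gv1 network.ping-timeout

As far as I understand it, that option only controls how long clients wait on a brick connection before declaring it dead; the glusterd-to-glusterd peer timeout on port 24007 is configured separately (in /etc/glusterfs/glusterd.vol), so changing it would not by itself explain or fix the 19:16 peer disconnects.

The following is the mount log from the HV (rhev-data-center-mnt-glusterSD-10.4.16.11\:gv1.log) for the same window.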
[2018-07-11 19:16:14.918389] W [socket.c:593:__socket_rwv] 0-gv1-client-2: readv on 10.4.16.19:49153 failed (No data available) [2018-07-11 19:16:14.918495] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-gv1-client-2: disconnected from gv1-client-2. Client process will keep trying to connect to glusterd until brick's port is available [2018-07-11 19:16:14.920183] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-2: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:47.548069 (xid=0x1be2e4) [2018-07-11 19:16:14.920207] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-2: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:14.920386] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-2: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:52.555712 (xid=0x1be2e5) [2018-07-11 19:16:14.920559] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-2: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:52.555852 (xid=0x1be2e6) [2018-07-11 19:16:14.920732] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-2: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:52.556010 (xid=0x1be2e7) [2018-07-11 19:16:14.920919] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-2: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:53.419502 (xid=0x1be2e9) [2018-07-11 19:16:14.921094] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> 
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-2: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2018-07-11 19:15:53.341989 (xid=0x1be2e8) [2018-07-11 19:16:14.921114] W [rpc-clnt-ping.c:223:rpc_clnt_ping_cbk] 0-gv1-client-2: socket disconnected [2018-07-11 19:16:14.921274] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-2: forced unwinding frame type(GlusterFS 3.3) op(FSTAT(25)) called at 2018-07-11 19:15:54.292390 (xid=0x1be2ea) [2018-07-11 19:16:14.921292] W [MSGID: 114031] [client-rpc-fops.c:1459:client3_3_fstat_cbk] 0-gv1-client-2: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:14.921328] W [MSGID: 114061] [client-common.c:704:client_pre_fstat] 0-gv1-client-2: (037fd1d1-71e7-4553-92ba-9a84c02bb63c) remote_fd is -1. EBADFD [File descriptor in bad state] [2018-07-11 19:16:16.231994] W [socket.c:593:__socket_rwv] 0-gv1-client-4: readv on 10.4.16.12:49152 failed (No data available) [2018-07-11 19:16:16.232035] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-gv1-client-4: disconnected from gv1-client-4. Client process will keep trying to connect to glusterd until brick's port is available [2018-07-11 19:16:16.232056] W [MSGID: 108001] [afr-common.c:5319:afr_notify] 0-gv1-replicate-0: Client-quorum is not met [2018-07-11 19:16:16.232602] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-4: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:47.548098 (xid=0x2f323e) [2018-07-11 19:16:16.232621] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-4: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.232792] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-4: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:52.555732 (xid=0x2f323f) [2018-07-11 19:16:16.233004] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-4: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 
19:15:52.555884 (xid=0x2f3240) [2018-07-11 19:16:16.233179] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-4: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:52.556033 (xid=0x2f3241) [2018-07-11 19:16:16.233350] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-4: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:53.419536 (xid=0x2f3242) [2018-07-11 19:16:16.233521] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-4: forced unwinding frame type(GlusterFS 3.3) op(FSTAT(25)) called at 2018-07-11 19:16:04.200401 (xid=0x2f3243) The message "E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-4: remote operation failed [Transport endpoint is not connected]" repeated 4 times between [2018-07-11 19:16:16.232621] and [2018-07-11 19:16:16.233368] [2018-07-11 19:16:16.233540] W [MSGID: 114031] [client-rpc-fops.c:1459:client3_3_fstat_cbk] 0-gv1-client-4: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.233578] W [MSGID: 114061] [client-common.c:704:client_pre_fstat] 0-gv1-client-2: (de4eb8bc-f5fa-4da8-b907-1562cbd38dfe) remote_fd is -1. EBADFD [File descriptor in bad state] [2018-07-11 19:16:16.233609] W [MSGID: 114061] [client-common.c:704:client_pre_fstat] 0-gv1-client-4: (de4eb8bc-f5fa-4da8-b907-1562cbd38dfe) remote_fd is -1. 
EBADFD [File descriptor in bad state] [2018-07-11 19:16:16.233779] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-4: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2018-07-11 19:16:14.345044 (xid=0x2f3244) [2018-07-11 19:16:16.233797] W [rpc-clnt-ping.c:223:rpc_clnt_ping_cbk] 0-gv1-client-4: socket disconnected [2018-07-11 19:16:16.233968] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-4: forced unwinding frame type(GlusterFS 3.3) op(FSTAT(25)) called at 2018-07-11 19:16:14.921385 (xid=0x2f3245) [2018-07-11 19:16:16.233992] W [MSGID: 114031] [client-rpc-fops.c:1459:client3_3_fstat_cbk] 0-gv1-client-4: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.362403] W [socket.c:593:__socket_rwv] 0-gv1-client-3: readv on 10.4.16.11:49153 failed (No data available) [2018-07-11 19:16:16.362449] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-gv1-client-3: disconnected from gv1-client-3. Client process will keep trying to connect to glusterd until brick's port is available [2018-07-11 19:16:16.362472] E [MSGID: 108006] [afr-common.c:5092:__afr_handle_child_down_event] 0-gv1-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up. [2018-07-11 19:16:16.362729] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:47.548081 (xid=0x1e3320) [2018-07-11 19:16:16.362749] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.362775] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.362778] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-4: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.362800] W [MSGID: 108019] [afr-lk-common.c:1102:is_blocking_locks_count_sufficient] 0-gv1-replicate-0: Unable to obtain blocking inode lock on even one child for gfid:59555375-1db1-428a-a7f2-08dcd79f0800. [2018-07-11 19:16:16.362811] I [MSGID: 108019] [afr-transaction.c:1820:afr_post_blocking_inodelk_cbk] 0-gv1-replicate-0: Blocking inodelks failed. 
[2018-07-11 19:16:16.363048] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:52.555722 (xid=0x1e3322) [2018-07-11 19:16:16.363068] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363090] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363092] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-4: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363112] W [MSGID: 108019] [afr-lk-common.c:1102:is_blocking_locks_count_sufficient] 0-gv1-replicate-0: Unable to obtain blocking inode lock on even one child for gfid:59555375-1db1-428a-a7f2-08dcd79f0800. [2018-07-11 19:16:16.363122] I [MSGID: 108019] [afr-transaction.c:1820:afr_post_blocking_inodelk_cbk] 0-gv1-replicate-0: Blocking inodelks failed. [2018-07-11 19:16:16.363317] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:52.555873 (xid=0x1e3323) [2018-07-11 19:16:16.363340] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363361] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363363] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-4: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363383] W [MSGID: 108019] [afr-lk-common.c:1102:is_blocking_locks_count_sufficient] 0-gv1-replicate-0: Unable to obtain blocking inode lock on even one child for gfid:59555375-1db1-428a-a7f2-08dcd79f0800. [2018-07-11 19:16:16.363393] I [MSGID: 108019] [afr-transaction.c:1820:afr_post_blocking_inodelk_cbk] 0-gv1-replicate-0: Blocking inodelks failed. 
[2018-07-11 19:16:16.363586] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:52.556024 (xid=0x1e3324) [2018-07-11 19:16:16.363604] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363624] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363626] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-4: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363647] W [MSGID: 108019] [afr-lk-common.c:1102:is_blocking_locks_count_sufficient] 0-gv1-replicate-0: Unable to obtain blocking inode lock on even one child for gfid:59555375-1db1-428a-a7f2-08dcd79f0800. [2018-07-11 19:16:16.363657] I [MSGID: 108019] [afr-transaction.c:1820:afr_post_blocking_inodelk_cbk] 0-gv1-replicate-0: Blocking inodelks failed. [2018-07-11 19:16:16.363848] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(FINODELK(30)) called at 2018-07-11 19:15:53.419521 (xid=0x1e3325) [2018-07-11 19:16:16.363877] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363903] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363905] E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-4: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.363927] W [MSGID: 108019] [afr-lk-common.c:1102:is_blocking_locks_count_sufficient] 0-gv1-replicate-0: Unable to obtain blocking inode lock on even one child for gfid:59555375-1db1-428a-a7f2-08dcd79f0800. [2018-07-11 19:16:16.363937] I [MSGID: 108019] [afr-transaction.c:1820:afr_post_blocking_inodelk_cbk] 0-gv1-replicate-0: Blocking inodelks failed. 
[2018-07-11 19:16:16.363978] W [fuse-bridge.c:1381:fuse_err_cbk] 0-glusterfs-fuse: 3673856: FSYNC() ERR => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.364185] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(STAT(1)) called at 2018-07-11 19:15:45.968944 (xid=0x1e331e) [2018-07-11 19:16:16.364203] W [MSGID: 114031] [client-rpc-fops.c:493:client3_3_stat_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.364239] W [MSGID: 114031] [client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-gv1-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected] [2018-07-11 19:16:16.364258] W [MSGID: 114031] [client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-gv1-client-3: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected] [2018-07-11 19:16:16.364273] W [MSGID: 114031] [client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-gv1-client-4: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected] [2018-07-11 19:16:16.364324] I [MSGID: 108006] [afr-common.c:5372:afr_local_init] 0-gv1-replicate-0: no subvolumes up [2018-07-11 19:16:16.364345] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673843: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.364527] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(STAT(1)) called at 2018-07-11 19:15:46.656801 (xid=0x1e331f) [2018-07-11 19:16:16.364546] W [MSGID: 114031] [client-rpc-fops.c:493:client3_3_stat_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.364575] W [MSGID: 114031] [client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-gv1-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected] [2018-07-11 19:16:16.364593] W [MSGID: 114031] [client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-gv1-client-3: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected] [2018-07-11 19:16:16.364608] W [MSGID: 114031] [client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-gv1-client-4: remote operation failed. 
Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected] [2018-07-11 19:16:16.364642] I [MSGID: 108006] [afr-common.c:5372:afr_local_init] 0-gv1-replicate-0: no subvolumes up [2018-07-11 19:16:16.364660] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673844: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.364841] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(STAT(1)) called at 2018-07-11 19:15:48.587690 (xid=0x1e3321) [2018-07-11 19:16:16.364866] W [MSGID: 114031] [client-rpc-fops.c:493:client3_3_stat_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.364911] W [MSGID: 114031] [client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-gv1-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected] [2018-07-11 19:16:16.364931] W [MSGID: 114031] [client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-gv1-client-3: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected] [2018-07-11 19:16:16.364948] W [MSGID: 114031] [client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-gv1-client-4: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected] [2018-07-11 19:16:16.364986] I [MSGID: 108006] [afr-common.c:5372:afr_local_init] 0-gv1-replicate-0: no subvolumes up [2018-07-11 19:16:16.365004] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673847: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.365187] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e ] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2018-07-11 19:16:03.343398 (xid=0x1e3326) [2018-07-11 19:16:16.365205] W [rpc-clnt-ping.c:223:rpc_clnt_ping_cbk] 0-gv1-client-3: socket disconnected [2018-07-11 19:16:16.365363] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(FSTAT(25)) called at 2018-07-11 19:16:09.304742 (xid=0x1e3327) [2018-07-11 19:16:16.365381] W [MSGID: 114031] [client-rpc-fops.c:1459:client3_3_fstat_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.365409] W [MSGID: 114061] [client-common.c:704:client_pre_fstat] 0-gv1-client-2: (59555375-1db1-428a-a7f2-08dcd79f0800) remote_fd is -1. 
EBADFD [File descriptor in bad state] [2018-07-11 19:16:16.365423] W [MSGID: 114061] [client-common.c:704:client_pre_fstat] 0-gv1-client-3: (59555375-1db1-428a-a7f2-08dcd79f0800) remote_fd is -1. EBADFD [File descriptor in bad state] [2018-07-11 19:16:16.365436] W [MSGID: 114061] [client-common.c:704:client_pre_fstat] 0-gv1-client-4: (59555375-1db1-428a-a7f2-08dcd79f0800) remote_fd is -1. EBADFD [File descriptor in bad state] [2018-07-11 19:16:16.365455] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673859: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/45e8d993-aa03-4905-b56c-5ead8e3a59a6/3e91c130-6aa0-44d7-8e8c-3128badb7607 => -1 (File descriptor in bad state) [2018-07-11 19:16:16.365638] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(FSTAT(25)) called at 2018-07-11 19:16:14.921371 (xid=0x1e3328) [2018-07-11 19:16:16.365656] W [MSGID: 114031] [client-rpc-fops.c:1459:client3_3_fstat_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.365677] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673857: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/38b23e13-da8b-431f-92fa-f019ce4c1cce/e03029a3-e8c6-4e08-8d69-14d8b079f5f6 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.365963] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fd882449efb] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fd88220ee6e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fd88220ef8e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fd882210710] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fd882211200] ))))) 0-gv1-client-3: forced unwinding frame type(GlusterFS 3.3) op(FSTAT(25)) called at 2018-07-11 19:16:16.233605 (xid=0x1e3329) [2018-07-11 19:16:16.366021] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673858: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (File descriptor in bad state) [2018-07-11 19:16:16.365999] W [MSGID: 114031] [client-rpc-fops.c:1459:client3_3_fstat_cbk] 0-gv1-client-3: remote operation failed [Transport endpoint is not connected] [2018-07-11 19:16:16.369427] I [MSGID: 108006] [afr-common.c:5372:afr_local_init] 0-gv1-replicate-0: no subvolumes up [2018-07-11 19:16:16.369483] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673860: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/38b23e13-da8b-431f-92fa-f019ce4c1cce/e03029a3-e8c6-4e08-8d69-14d8b079f5f6 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.369573] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673861: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/38b23e13-da8b-431f-92fa-f019ce4c1cce/e03029a3-e8c6-4e08-8d69-14d8b079f5f6 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.372119] W [fuse-bridge.c:1381:fuse_err_cbk] 0-glusterfs-fuse: 3673862: FSYNC() ERR => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.372807] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673863: FSTAT() 
/9f87196f-17eb-4b9a-9f7f-adb377255b50/images/45e8d993-aa03-4905-b56c-5ead8e3a59a6/3e91c130-6aa0-44d7-8e8c-3128badb7607 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.372906] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673864: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/45e8d993-aa03-4905-b56c-5ead8e3a59a6/3e91c130-6aa0-44d7-8e8c-3128badb7607 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.377819] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673865: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/45e8d993-aa03-4905-b56c-5ead8e3a59a6/3e91c130-6aa0-44d7-8e8c-3128badb7607 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.378597] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673866: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/45e8d993-aa03-4905-b56c-5ead8e3a59a6/3e91c130-6aa0-44d7-8e8c-3128badb7607 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.378657] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673867: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/45e8d993-aa03-4905-b56c-5ead8e3a59a6/3e91c130-6aa0-44d7-8e8c-3128badb7607 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.387724] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673868: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/45e8d993-aa03-4905-b56c-5ead8e3a59a6/3e91c130-6aa0-44d7-8e8c-3128badb7607 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:16.866371] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673869: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:17.366758] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673870: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:17.867175] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673871: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:18.367548] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673872: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:18.378199] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673873: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:18.586617] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673874: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:18.867968] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673875: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:19.368384] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673876: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:19.387163] I [glusterfsd-mgmt.c:1888:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing [2018-07-11 19:16:19.868734] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673877: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:20.369113] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673878: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:20.390977] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673879: LOOKUP() / => -1 (Transport 
endpoint is not connected) [2018-07-11 19:16:20.621602] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673880: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:20.869520] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673881: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:20.975257] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673882: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:21.369905] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673883: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:21.870310] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673884: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:22.081291] W [fuse-bridge.c:1381:fuse_err_cbk] 0-glusterfs-fuse: 3673887: FSYNC() ERR => -1 (Transport endpoint is not connected) [2018-07-11 19:16:22.084700] W [fuse-bridge.c:1381:fuse_err_cbk] 0-glusterfs-fuse: 3673888: FSYNC() ERR => -1 (Transport endpoint is not connected) [2018-07-11 19:16:22.094940] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673889: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/images/38b23e13-da8b-431f-92fa-f019ce4c1cce/e03029a3-e8c6-4e08-8d69-14d8b079f5f6 => -1 (Transport endpoint is not connected) [2018-07-11 19:16:22.370778] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673890: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:22.423209] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673891: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:22.574824] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673892: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:22.871227] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673893: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:23.371628] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673894: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:23.872026] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673895: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:24.372444] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673896: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:24.436119] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673897: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:24.872850] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673898: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:25.373254] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673899: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) [2018-07-11 19:16:25.873651] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673900: FSTAT() /9f87196f-17eb-4b9a-9f7f-adb377255b50/dom_md/ids => -1 (Transport endpoint is not connected) The message "I [MSGID: 108006] [afr-common.c:5372:afr_local_init] 0-gv1-replicate-0: no subvolumes up" 
repeated 47 times between [2018-07-11 19:16:16.369427] and [2018-07-11 19:16:25.873639] [2018-07-11 19:16:26.039513] E [MSGID: 114058] [client-handshake.c:1565:client_query_portmap_cbk] 0-gv1-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running. [2018-07-11 19:16:26.039611] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-gv1-client-2: disconnected from gv1-client-2. Client process will keep trying to connect to glusterd until brick's port is available [2018-07-11 19:16:26.039616] E [MSGID: 108006] [afr-common.c:5092:__afr_handle_child_down_event] 0-gv1-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up. [2018-07-11 19:16:26.361070] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-gv1-client-4: changing port to 49154 (from 0) [2018-07-11 19:16:26.365057] I [MSGID: 114057] [client-handshake.c:1478:select_server_supported_programs] 0-gv1-client-4: Using Program GlusterFS 3.3, Num (1298437), Version (330) [2018-07-11 19:16:26.365831] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-gv1-client-4: Connected to gv1-client-4, attached to remote volume '/glusterfs/data1/gv1'. [2018-07-11 19:16:26.365940] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-gv1-client-4: Server and Client lk-version numbers are not same, reopening the fds [2018-07-11 19:16:26.365971] I [MSGID: 114042] [client-handshake.c:1047:client_post_handshake] 0-gv1-client-4: 5 fds open - Delaying child_up until they are re-opened [2018-07-11 19:16:26.366730] I [MSGID: 108006] [afr-common.c:5372:afr_local_init] 0-gv1-replicate-0: no subvolumes up [2018-07-11 19:16:26.366813] W [fuse-bridge.c:871:fuse_attr_cbk] 0-glusterfs-fuse: 3673901: LOOKUP() / => -1 (Transport endpoint is not connected) [2018-07-11 19:16:26.366803] I [MSGID: 108006] [afr-common.c:5372:afr_local_init] 0-gv1-replicate-0: no subvolumes up [2018-07-11 19:16:26.366887] I [MSGID: 114041] [client-handshake.c:678:client_child_up_reopen_done] 0-gv1-client-4: last fd open'd/lock-self-heal'd - notifying CHILD-UP [2018-07-11 19:16:26.366952] I [MSGID: 108005] [afr-common.c:5015:__afr_handle_child_up_event] 0-gv1-replicate-0: Subvolume 'gv1-client-4' came back up; going online. [2018-07-11 19:16:26.367114] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-gv1-client-4: Server lk version = 1 [2018-07-11 19:16:26.378705] W [fuse-bridge.c:2402:fuse_writev_cbk] 0-glusterfs-fuse: 3673912: WRITE => -1 gfid=de4eb8bc-f5fa-4da8-b907-1562cbd38dfe fd=0x7fd870001790 (Read-only file system) [2018-07-11 19:16:26.882546] W [fuse-bridge.c:2402:fuse_writev_cbk] 0-glusterfs-fuse: 3673939: WRITE => -1 gfid=de4eb8bc-f5fa-4da8-b907-1562cbd38dfe fd=0x7fd870001790 (Read-only file system) [2018-07-11 19:16:27.364149] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-gv1-client-3: changing port to 49152 (from 0) [2018-07-11 19:16:27.367908] I [MSGID: 114057] [client-handshake.c:1478:select_server_supported_programs] 0-gv1-client-3: Using Program GlusterFS 3.3, Num (1298437), Version (330) [2018-07-11 19:16:27.368659] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-gv1-client-3: Connected to gv1-client-3, attached to remote volume '/glusterfs/data1/gv1'. 
[2018-07-11 19:16:27.368698] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-gv1-client-3: Server and Client lk-version numbers are not same, reopening the fds [2018-07-11 19:16:27.368716] I [MSGID: 114042] [client-handshake.c:1047:client_post_handshake] 0-gv1-client-3: 5 fds open - Delaying child_up until they are re-opened [2018-07-11 19:16:27.369663] I [MSGID: 114041] [client-handshake.c:678:client_child_up_reopen_done] 0-gv1-client-3: last fd open'd/lock-self-heal'd - notifying CHILD-UP [2018-07-11 19:16:27.369729] I [MSGID: 108002] [afr-common.c:5312:afr_notify] 0-gv1-replicate-0: Client-quorum is met [2018-07-11 19:16:27.369853] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-gv1-client-3: Server lk version = 1 [2018-07-11 19:16:27.369853] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-gv1-client-3: Server lk version = 1 [2018-07-11 19:16:31.575635] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-gv1-client-2: changing port to 49152 (from 0) [2018-07-11 19:16:31.580371] I [MSGID: 114057] [client-handshake.c:1478:select_server_supported_programs] 0-gv1-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330) [2018-07-11 19:16:31.581181] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-gv1-client-2: Connected to gv1-client-2, attached to remote volume '/glusterfs/data1/gv1'. [2018-07-11 19:16:31.581213] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-gv1-client-2: Server and Client lk-version numbers are not same, reopening the fds [2018-07-11 19:16:31.581230] I [MSGID: 114042] [client-handshake.c:1047:client_post_handshake] 0-gv1-client-2: 5 fds open - Delaying child_up until they are re-opened [2018-07-11 19:16:31.582124] I [MSGID: 114041] [client-handshake.c:678:client_child_up_reopen_done] 0-gv1-client-2: last fd open'd/lock-self-heal'd - notifying CHILD-UP [2018-07-11 19:16:31.582476] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-gv1-client-2: Server lk version = 1 [2018-07-11 19:16:36.388675] W [fuse-bridge.c:1381:fuse_err_cbk] 0-glusterfs-fuse: 3674028: FSYNC() ERR => -1 (Success) [2018-07-11 19:16:36.414829] W [fuse-bridge.c:1381:fuse_err_cbk] 0-glusterfs-fuse: 3674029: FSYNC() ERR => -1 (Success) The message "E [MSGID: 114031] [client-rpc-fops.c:1557:client3_3_finodelk_cbk] 0-gv1-client-2: remote operation failed [Transport endpoint is not connected]" repeated 9 times between [2018-07-11 19:16:14.920207] and [2018-07-11 19:16:16.363900]
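If this happens again my plan, unless someone has a better suggestion, is to capture cluster state on all three sans as soon as the first readv/timeout message appears, roughly:

# gluster peer status
# gluster volume status gv1
# gluster volume heal gv1 info
# ss -tan | grep 24007

mainly to see whether the glusterd connections on 24007 are actually dropping or just timing out, and whether the brick processes stay up while glusterd restarts.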