Kindly check the attached new log file. I don't know if it's helpful or not, but I couldn't find a log with the name you just described.
--
Respectfully
Mahdi A. Mahdi
From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
Sent: Saturday, March 18, 2017 6:10:40 PM
To: Mahdi Adnan
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Gluster 3.8.10 rebalance VMs corruption
mnt-disk11-vmware2.log seems like a brick log. Could you attach the FUSE mount logs? They should be right under the /var/log/glusterfs/ directory, named after the mount point, only hyphenated.
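For example, if the volume were mounted at, say, /mnt/vmware2 on the client (an illustrative path only), the corresponding log could be located with something like:

ls -l /var/log/glusterfs/mnt-vmware2.log

i.e. the mount path with its slashes turned into hyphens.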
On Sat, Mar 18, 2017 at 7:27 PM, Mahdi Adnan
<mahdi.adnan@xxxxxxxxxxx> wrote:
Hello Krutika,
Kindly check the attached logs.
--
Respectfully
Mahdi A. Mahdi
From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
Sent: Saturday, March 18, 2017 3:29:03 PM
To: Mahdi Adnan
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Gluster 3.8.10 rebalance VMs corruption

Hi Mahdi,

Could you attach mount, brick and rebalance logs?

-Krutika
On Sat, Mar 18, 2017 at 12:14 AM, Mahdi Adnan <mahdi.adnan@xxxxxxxxxxx> wrote:
Hi,
I upgraded to Gluster 3.8.10 today and ran the add-brick procedure on a volume containing a few VMs. After the rebalance completed, I rebooted the VMs; some of them ran just fine, while others crashed. Windows boots into recovery mode, and Linux throws XFS errors and does not boot. I ran the test again and it happened just like the first time, but I noticed that only VMs doing disk I/O are affected by this bug. The VMs that were powered off started fine, and even the md5 of the disk file did not change after the rebalance.
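The md5 comparison was along these lines (the mount point and disk file name below are placeholders only):

# hash a VM disk image on the gluster mount before add-brick/rebalance
md5sum /mnt/vmware2/example-vm/example-disk.vmdk
# repeat after the rebalance completes and compare the two sums
md5sum /mnt/vmware2/example-vm/example-disk.vmdk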
Can anyone else confirm this?
Volume info:

Volume Name: vmware2
Type: Distributed-Replicate
Volume ID: 02328d46-a285-4533-aa3a-fb9bfeb688bf
Status: Started
Snapshot Count: 0
Number of Bricks: 22 x 2 = 44
Transport-type: tcp
Bricks:
Brick1: gluster01:/mnt/disk1/vmware2
Brick2: gluster03:/mnt/disk1/vmware2
Brick3: gluster02:/mnt/disk1/vmware2
Brick4: gluster04:/mnt/disk1/vmware2
Brick5: gluster01:/mnt/disk2/vmware2
Brick6: gluster03:/mnt/disk2/vmware2
Brick7: gluster02:/mnt/disk2/vmware2
Brick8: gluster04:/mnt/disk2/vmware2
Brick9: gluster01:/mnt/disk3/vmware2
Brick10: gluster03:/mnt/disk3/vmware2
Brick11: gluster02:/mnt/disk3/vmware2
Brick12: gluster04:/mnt/disk3/vmware2
Brick13: gluster01:/mnt/disk4/vmware2
Brick14: gluster03:/mnt/disk4/vmware2
Brick15: gluster02:/mnt/disk4/vmware2
Brick16: gluster04:/mnt/disk4/vmware2
Brick17: gluster01:/mnt/disk5/vmware2
Brick18: gluster03:/mnt/disk5/vmware2
Brick19: gluster02:/mnt/disk5/vmware2
Brick20: gluster04:/mnt/disk5/vmware2
Brick21: gluster01:/mnt/disk6/vmware2
Brick22: gluster03:/mnt/disk6/vmware2
Brick23: gluster02:/mnt/disk6/vmware2
Brick24: gluster04:/mnt/disk6/vmware2
Brick25: gluster01:/mnt/disk7/vmware2
Brick26: gluster03:/mnt/disk7/vmware2
Brick27: gluster02:/mnt/disk7/vmware2
Brick28: gluster04:/mnt/disk7/vmware2
Brick29: gluster01:/mnt/disk8/vmware2
Brick30: gluster03:/mnt/disk8/vmware2
Brick31: gluster02:/mnt/disk8/vmware2
Brick32: gluster04:/mnt/disk8/vmware2
Brick33: gluster01:/mnt/disk9/vmware2
Brick34: gluster03:/mnt/disk9/vmware2
Brick35: gluster02:/mnt/disk9/vmware2
Brick36: gluster04:/mnt/disk9/vmware2
Brick37: gluster01:/mnt/disk10/vmware2
Brick38: gluster03:/mnt/disk10/vmware2
Brick39: gluster02:/mnt/disk10/vmware2
Brick40: gluster04:/mnt/disk10/vmware2
Brick41: gluster01:/mnt/disk11/vmware2
Brick42: gluster03:/mnt/disk11/vmware2
Brick43: gluster02:/mnt/disk11/vmware2
Brick44: gluster04:/mnt/disk11/vmware2
Options Reconfigured:
cluster.server-quorum-type: server
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
features.shard: on
cluster.data-self-heal-algorithm: full
features.cache-invalidation: on
ganesha.enable: on
features.shard-block-size: 256MB
client.event-threads: 2
server.event-threads: 2
cluster.favorite-child-policy: size
storage.build-pgfid: off
network.ping-timeout: 5
cluster.enable-shared-storage: enable
nfs-ganesha: enable
cluster.server-quorum-ratio: 51%
Adding bricks:
gluster volume add-brick vmware2 replica 2 gluster01:/mnt/disk11/vmware2 gluster03:/mnt/disk11/vmware2 gluster02:/mnt/disk11/vmware2 gluster04:/mnt/disk11/vmware2
Starting fix-layout:
gluster volume rebalance vmware2 fix-layout start
Starting rebalance:
gluster volume rebalance vmware2 start
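Rebalance progress and completion can be tracked with the standard status command, e.g.:

gluster volume rebalance vmware2 status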
--
Respectfully
Mahdi A. Mahdi
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
[2017-03-16 20:41:52.668396] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk4/ovirt_vol03 on port 49499 [2017-03-16 20:41:52.701466] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk1/ovirt_vol03 on port 49496 [2017-03-16 20:41:52.737534] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk8/ovirt_vol03 on port 49503 [2017-03-16 20:41:52.801325] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk3/vmware2 on port 49471 [2017-03-16 20:41:52.835468] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk7/vmware2 on port 49475 [2017-03-16 20:41:52.914617] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk9/vmware2 on port 49477 [2017-03-16 20:41:52.948999] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk2/vmware2 on port 49470 [2017-03-16 20:41:52.967000] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk8/vmware2 on port 49476 [2017-03-16 20:41:52.975660] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk2/ovirt_vol03 on port 49497 [2017-03-16 20:41:53.003684] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58, host: gluster04, port: 0 [2017-03-16 20:41:53.009438] C [MSGID: 106003] [glusterd-server-quorum.c:341:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume ovirt_vol03. Starting local bricks. [2017-03-16 20:41:53.009640] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.009836] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.010015] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.010180] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.010355] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.010518] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.010692] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.010856] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.010958] C [MSGID: 106003] [glusterd-server-quorum.c:341:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmware2. Starting local bricks. 
[2017-03-16 20:41:53.011052] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.011214] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.011391] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.011549] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.011700] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.011856] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.012013] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.012168] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.012334] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.012496] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:53.012637] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:41:53.012694] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:41:53.047531] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600 [2017-03-16 20:41:53.047789] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-16 20:41:53.047896] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-16 20:41:53.048731] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600 [2017-03-16 20:41:53.055044] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 12740 [2017-03-16 20:41:54.055221] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-16 20:41:54.055326] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-16 20:41:54.059691] W [socket.c:3075:socket_connect] 0-glustershd: Ignore failed connection attempt on , (No such file or directory) [2017-03-16 20:41:54.059986] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600 [2017-03-16 20:41:54.060430] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-16 20:41:54.060465] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-16 20:41:54.060531] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600 [2017-03-16 20:41:54.060879] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-16 20:41:54.060901] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-16 20:41:54.060962] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600 [2017-03-16 20:41:54.061306] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-16 
20:41:54.061334] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-16 20:41:54.061440] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:41:54.062181] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600 [2017-03-16 20:41:54.062418] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600 [2017-03-16 20:41:54.062625] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600 [2017-03-16 20:41:54.065261] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:41:54.065452] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f, host: gluster02, port: 0 [2017-03-16 20:41:54.066916] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-16 20:41:54.066976] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-16 20:41:54.073300] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 39261 [2017-03-16 20:41:55.073464] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-16 20:41:55.073558] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-16 20:41:55.078973] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-16 20:41:55.079037] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-16 20:41:55.079244] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-16 20:41:55.079291] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-16 20:41:55.079490] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-16 20:41:55.079511] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-16 20:41:55.080244] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc, host: gluster03, port: 0 [2017-03-16 20:41:55.083207] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-16 20:41:55.083276] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-16 20:41:55.089619] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 39282 [2017-03-16 20:41:56.089796] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-16 20:41:56.089870] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-16 20:41:56.094897] W [socket.c:3075:socket_connect] 0-glustershd: Ignore failed connection attempt on , (No such file or directory) [2017-03-16 20:41:56.095184] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-16 
20:41:56.095221] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-16 20:41:56.095447] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-16 20:41:56.095475] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-16 20:41:56.095670] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-16 20:41:56.095691] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-16 20:41:56.100468] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800 [2017-03-16 20:41:56.101761] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:41:56.101821] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:41:56.101884] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk5/ovirt_vol03 on port 49500 [2017-03-16 20:41:56.103744] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk4/vmware2 on port 49472 [2017-03-16 20:41:56.103890] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:41:56.103931] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:41:56.107707] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk6/vmware2 on port 49474 [2017-03-16 20:41:56.107822] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:41:56.107859] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:41:56.109832] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800 [2017-03-16 20:41:56.110728] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:41:56.110781] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:41:56.112896] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800 [2017-03-16 20:41:56.118666] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:41:56.120384] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster04 (0), ret: 0, op_ret: 0 [2017-03-16 20:41:56.122443] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:41:56.122486] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as 
Friend [2017-03-16 20:41:56.124087] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:41:56.125429] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster02 (0), ret: 0, op_ret: 0 [2017-03-16 20:41:56.127318] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:41:56.127365] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:41:56.127394] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:41:56.128919] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:41:56.130949] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:41:56.132533] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster03 (0), ret: 0, op_ret: 0 [2017-03-16 20:41:56.134557] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:41:56.134606] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:41:56.136166] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:42:01.947376] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f442abecdc5] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x7f442c27ecd5] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x7f442c27eb4b] ) 0-: received signum (15), shutting down [2017-03-16 20:42:01.981631] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.8.10 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO) [2017-03-16 20:42:01.987525] I [MSGID: 106478] [glusterd.c:1379:init] 0-management: Maximum allowed open file descriptors set to 65536 [2017-03-16 20:42:01.987573] I [MSGID: 106479] [glusterd.c:1428:init] 0-management: Using /var/lib/glusterd as working directory [2017-03-16 20:42:01.994363] E [rpc-transport.c:287:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.8.10/rpc-transport/rdma.so: cannot open shared object file: No such file or directory [2017-03-16 20:42:01.994388] W [rpc-transport.c:291:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine [2017-03-16 20:42:01.994399] W [rpcsvc.c:1638:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed [2017-03-16 20:42:01.994408] E [MSGID: 106243] [glusterd.c:1652:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport [2017-03-16 20:42:01.995645] I [MSGID: 106228] [glusterd.c:429:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [No such file or directory] [2017-03-16 20:42:01.996387] I [MSGID: 
106513] [glusterd-store.c:2098:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30800 [2017-03-16 20:42:02.059263] I [MSGID: 106498] [glusterd-handler.c:3649:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2017-03-16 20:42:02.059461] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:02.064605] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:02.069102] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 The message "I [MSGID: 106498] [glusterd-handler.c:3649:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0" repeated 2 times between [2017-03-16 20:42:02.059263] and [2017-03-16 20:42:02.059397] [2017-03-16 20:42:02.075417] I [MSGID: 106544] [glusterd.c:155:glusterd_uuid_init] 0-management: retrieved UUID: f15808d9-ab37-4126-b4e4-1d14011e4e0f Final graph: +------------------------------------------------------------------------------+ 1: volume management 2: type mgmt/glusterd 3: option rpc-auth.auth-glusterfs on 4: option rpc-auth.auth-unix on 5: option rpc-auth.auth-null on 6: option rpc-auth-allow-insecure on 7: option transport.socket.listen-backlog 128 8: option event-threads 1 9: option ping-timeout 0 10: option transport.socket.read-fail-log off 11: option transport.socket.keepalive-interval 2 12: option transport.socket.keepalive-time 10 13: option transport-type rdma 14: option working-directory /var/lib/glusterd 15: end-volume 16: +------------------------------------------------------------------------------+ [2017-03-16 20:42:02.107651] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2017-03-16 20:42:11.981208] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk2/ovirt_vol03 on port 49497 [2017-03-16 20:42:12.010618] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58, host: gluster04, port: 0 [2017-03-16 20:42:12.011985] C [MSGID: 106003] [glusterd-server-quorum.c:341:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume ovirt_vol03. Starting local bricks. [2017-03-16 20:42:12.012225] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.012646] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.012824] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.012978] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.013123] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.013278] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.013435] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.013587] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.013697] C [MSGID: 106003] [glusterd-server-quorum.c:341:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmware2. Starting local bricks. 
[2017-03-16 20:42:12.013792] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.013951] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.014102] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.014263] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.014443] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.014605] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.014776] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.014927] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.015091] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.015243] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:12.017602] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600 [2017-03-16 20:42:12.017819] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-16 20:42:12.017870] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-16 20:42:12.018524] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600 [2017-03-16 20:42:12.024309] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 39289 [2017-03-16 20:42:13.024508] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-16 20:42:13.024601] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-16 20:42:13.029157] W [socket.c:3075:socket_connect] 0-glustershd: Ignore failed connection attempt on , (No such file or directory) [2017-03-16 20:42:13.029501] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600 [2017-03-16 20:42:13.029957] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-16 20:42:13.029991] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-16 20:42:13.030048] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600 [2017-03-16 20:42:13.030401] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-16 20:42:13.030432] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-16 20:42:13.030485] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600 [2017-03-16 20:42:13.030814] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-16 20:42:13.030838] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-16 20:42:13.030938] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-16 20:42:13.031701] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 
600 [2017-03-16 20:42:13.031905] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600 [2017-03-16 20:42:13.032086] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600 [2017-03-16 20:42:13.032351] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:42:13.032443] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:42:13.233491] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f, host: gluster02, port: 0 [2017-03-16 20:42:13.234638] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc, host: gluster03, port: 0 [2017-03-16 20:42:13.235790] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-16 20:42:13.235857] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-16 20:42:13.241946] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 45100 [2017-03-16 20:42:14.242138] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-16 20:42:14.242237] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-16 20:42:14.246692] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-16 20:42:14.246749] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-16 20:42:14.246951] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-16 20:42:14.246977] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-16 20:42:14.247166] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-16 20:42:14.247190] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-16 20:42:14.250622] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:42:14.252602] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-16 20:42:14.252664] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-16 20:42:14.257814] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 45929 [2017-03-16 20:42:15.257997] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-16 20:42:15.258085] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-16 20:42:15.262505] W [socket.c:3075:socket_connect] 0-glustershd: Ignore failed connection attempt on , (No such file or directory) [2017-03-16 20:42:15.262789] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-16 
20:42:15.262829] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-16 20:42:15.262992] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-16 20:42:15.263008] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-16 20:42:15.263135] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-16 20:42:15.263149] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-16 20:42:15.263769] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:42:15.263873] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:42:15.263927] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800 [2017-03-16 20:42:15.267104] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk5/ovirt_vol03 on port 49500 [2017-03-16 20:42:15.269055] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk4/vmware2 on port 49472 [2017-03-16 20:42:15.269227] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:42:15.269290] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:42:15.273005] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk6/vmware2 on port 49474 [2017-03-16 20:42:15.273162] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:42:15.273201] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:42:15.274981] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800 [2017-03-16 20:42:15.277519] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk6/ovirt_vol03 on port 49501 [2017-03-16 20:42:15.279293] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800 [2017-03-16 20:42:15.280046] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk5/vmware2 on port 49473 [2017-03-16 20:42:15.281997] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:42:15.283839] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster04 (0), ret: 0, op_ret: 0 [2017-03-16 20:42:15.285916] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk3/ovirt_vol03 on port 49498 [2017-03-16 20:42:15.287719] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:42:15.289167] I 
[MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster02 (0), ret: 0, op_ret: 0 [2017-03-16 20:42:15.291190] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:42:15.291235] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:42:15.292832] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58 [2017-03-16 20:42:15.292876] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /var/lib/glusterd/ss_brick on port 49479 [2017-03-16 20:42:15.294559] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:42:15.294600] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:42:15.296127] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:42:15.296196] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:42:15.297578] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster03 (0), ret: 0, op_ret: 0 [2017-03-16 20:42:15.299484] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk7/ovirt_vol03 on port 49502 [2017-03-16 20:42:15.301193] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:42:15.301234] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:42:15.302667] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc [2017-03-16 20:42:15.302708] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk1/vmware2 on port 49469 [2017-03-16 20:42:15.304475] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk10/vmware2 on port 49478 [2017-03-16 20:42:15.306126] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk4/ovirt_vol03 on port 49499 [2017-03-16 20:42:15.307755] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk1/ovirt_vol03 on port 49496 [2017-03-16 20:42:15.309383] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk8/ovirt_vol03 on port 49503 [2017-03-16 20:42:15.311005] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk3/vmware2 on port 49471 [2017-03-16 20:42:15.312627] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk7/vmware2 on port 49475 [2017-03-16 20:42:15.315887] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk9/vmware2 on port 49477 [2017-03-16 20:42:15.317496] I [MSGID: 106143] 
[glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk2/vmware2 on port 49470 [2017-03-16 20:42:15.319101] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk8/vmware2 on port 49476 [2017-03-16 20:45:26.738292] W [socket.c:590:__socket_rwv] 0-management: readv on 192.168.209.195:24007 failed (No data available) [2017-03-16 20:45:26.738397] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <gluster02> (<ab091583-d6ee-48be-b0b4-99e1aabd843f>), in state <Peer in Cluster>, has disconnected from glusterd. [2017-03-16 20:45:26.738515] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol gluster_shared_storage not held [2017-03-16 20:45:26.738534] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage [2017-03-16 20:45:26.738566] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol ovirt_vol03 not held [2017-03-16 20:45:26.738580] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for ovirt_vol03 [2017-03-16 20:45:26.738607] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol vmware2 not held [2017-03-16 20:45:26.738620] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for vmware2 [2017-03-16 20:45:29.052593] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800 [2017-03-16 20:45:29.061998] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:29.206342] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800 [2017-03-16 20:45:29.215482] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:37.010519] I [socket.c:3475:socket_submit_reply] 0-socket.management: not connected (priv->connected = -1) [2017-03-16 20:45:37.010562] E [rpcsvc.c:1325:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0x4, Program: GlusterD svc peer, ProgVers: 2, Proc: 2) to rpc-transport (socket.management) [2017-03-16 20:45:37.010587] E [MSGID: 106430] [glusterd-utils.c:470:glusterd_submit_reply] 0-glusterd: Reply submission failed [2017-03-16 20:45:37.010613] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster02 (0), ret: -1, op_ret: 0 [2017-03-16 20:45:37.010627] E [MSGID: 106376] 
[glusterd-sm.c:1397:glusterd_friend_sm] 0-glusterd: handler returned: -1 [2017-03-16 20:45:37.012280] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster02 (0), ret: 0, op_ret: 0 [2017-03-16 20:45:37.017829] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-16 20:45:37.017908] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-16 20:45:37.024272] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 46261 [2017-03-16 20:45:38.024458] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-16 20:45:38.024530] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-16 20:45:38.031151] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-16 20:45:38.031241] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-16 20:45:38.031472] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-16 20:45:38.031502] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-16 20:45:38.031705] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-16 20:45:38.031728] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-16 20:45:38.032519] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:38.032616] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:45:38.037094] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:40.109910] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:40.125420] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f, host: gluster02, port: 0 [2017-03-16 20:45:40.126530] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:40.126583] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:45:40.130607] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:44.915458] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol gluster_shared_storage not held [2017-03-16 20:45:44.915518] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol ovirt_vol03 not held [2017-03-16 20:45:44.915549] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol vmware2 not held [2017-03-16 20:45:44.915562] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for vmware2 [2017-03-16 20:45:45.040614] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800 [2017-03-16 20:45:45.056020] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:55.017631] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster02 (0), ret: 0, op_ret: 0 [2017-03-16 20:45:55.021400] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-16 20:45:55.021471] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-16 20:45:55.028116] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 20603 [2017-03-16 20:45:56.028328] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-16 20:45:56.028418] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-16 20:45:56.034584] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-16 20:45:56.034656] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-16 20:45:56.034873] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-16 20:45:56.034898] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-16 20:45:56.035101] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-16 20:45:56.035123] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-16 20:45:56.035922] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:56.036024] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend [2017-03-16 20:45:56.038761] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f [2017-03-16 20:45:57.121716] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f, host: gluster02, port: 0 [2017-03-16 20:45:57.122872] I [MSGID: 106492] 
[glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-16 20:45:57.122931] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-16 20:45:57.127204] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-16 20:45:44.915396] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <gluster02> (<ab091583-d6ee-48be-b0b4-99e1aabd843f>), in state <Peer in Cluster>, has disconnected from glusterd.
[2017-03-16 20:45:44.915497] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
[2017-03-16 20:45:44.915533] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for ovirt_vol03
[2017-03-16 20:57:24.149603] W [socket.c:590:__socket_rwv] 0-management: readv on 192.168.209.196:24007 failed (No data available)
[2017-03-16 20:57:24.149711] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <gluster03> (<0b388c89-ea8b-4e6a-8649-1d870d2bf3bc>), in state <Peer in Cluster>, has disconnected from glusterd.
[2017-03-16 20:57:24.149798] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol gluster_shared_storage not held
[2017-03-16 20:57:24.149823] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
[2017-03-16 20:57:24.149857] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol ovirt_vol03 not held
[2017-03-16 20:57:24.149894] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for ovirt_vol03
[2017-03-16 20:57:24.149928] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol vmware2 not held
[2017-03-16 20:57:24.149946] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for vmware2
[2017-03-16 20:57:26.464431] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-16 20:57:26.475228] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:26.643132] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-16 20:57:26.654248] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:35.093634] I [socket.c:3475:socket_submit_reply] 0-socket.management: not connected (priv->connected = -1)
[2017-03-16 20:57:35.093670] E [rpcsvc.c:1325:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0x4, Program: GlusterD svc peer, ProgVers: 2, Proc: 2) to rpc-transport (socket.management)
[2017-03-16 20:57:35.093692] E [MSGID: 106430] [glusterd-utils.c:470:glusterd_submit_reply] 0-glusterd: Reply submission failed
[2017-03-16 20:57:35.093715] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster03 (0), ret: -1, op_ret: 0
[2017-03-16 20:57:35.093729] E [MSGID: 106376] [glusterd-sm.c:1397:glusterd_friend_sm] 0-glusterd: handler returned: -1
[2017-03-16 20:57:35.094980] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster03 (0), ret: 0, op_ret: 0
[2017-03-16 20:57:35.099731] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-16 20:57:35.099793] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-16 20:57:35.105527] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 21033
[2017-03-16 20:57:35.135466] W [socket.c:590:__socket_rwv] 0-glustershd: readv on /var/run/gluster/221e0a3b84a49826116ab9161a9a6207.socket failed (No data available)
[2017-03-16 20:57:36.105738] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-16 20:57:36.105841] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-16 20:57:36.112035] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-16 20:57:36.112103] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-16 20:57:36.112325] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-16 20:57:36.112356] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-16 20:57:36.112569] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-16 20:57:36.112594] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-16 20:57:36.113459] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:36.113539] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-16 20:57:38.387527] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:38.472288] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:38.488747] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc, host: gluster03, port: 0
[2017-03-16 20:57:38.490002] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:38.490056] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-16 20:57:38.494263] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:44.013145] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol gluster_shared_storage not held
[2017-03-16 20:57:44.013214] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol ovirt_vol03 not held
[2017-03-16 20:57:44.013245] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol vmware2 not held
[2017-03-16 20:57:44.013298] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for vmware2
[2017-03-16 20:57:44.174811] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-16 20:57:44.185791] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:54.101276] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster03 (0), ret: 0, op_ret: 0
[2017-03-16 20:57:54.104925] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-16 20:57:54.105008] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-16 20:57:54.111593] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 25372
[2017-03-16 20:57:55.111782] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-16 20:57:55.111867] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-16 20:57:55.117544] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-16 20:57:55.117608] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-16 20:57:55.117823] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-16 20:57:55.117860] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-16 20:57:55.118064] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-16 20:57:55.118087] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-16 20:57:55.118882] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:55.118990] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-16 20:57:55.121409] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:56.211498] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc, host: gluster03, port: 0
[2017-03-16 20:57:56.212766] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:56.212815] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-16 20:57:56.216946] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-16 20:57:44.013075] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <gluster03> (<0b388c89-ea8b-4e6a-8649-1d870d2bf3bc>), in state <Peer in Cluster>, has disconnected from glusterd.
[2017-03-16 20:57:44.013191] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
[2017-03-16 20:57:44.013228] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for ovirt_vol03
[2017-03-16 21:04:15.883997] W [socket.c:590:__socket_rwv] 0-management: readv on 192.168.209.197:24007 failed (No data available)
[2017-03-16 21:04:15.884105] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <gluster04> (<3e37013d-4750-403e-bf02-305e34546d58>), in state <Peer in Cluster>, has disconnected from glusterd.
[2017-03-16 21:04:15.884188] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol gluster_shared_storage not held
[2017-03-16 21:04:15.884209] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
[2017-03-16 21:04:15.884238] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol ovirt_vol03 not held
[2017-03-16 21:04:15.884262] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for ovirt_vol03
[2017-03-16 21:04:15.884300] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol vmware2 not held
[2017-03-16 21:04:15.884314] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for vmware2
[2017-03-16 21:04:18.218863] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-16 21:04:18.398032] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-16 21:04:18.407798] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:26.146777] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster04 (0), ret: 0, op_ret: 0
[2017-03-16 21:04:26.150755] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-16 21:04:26.150808] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-16 21:04:26.155573] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 25583
[2017-03-16 21:04:27.155736] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-16 21:04:27.155834] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-16 21:04:27.162282] W [socket.c:3075:socket_connect] 0-glustershd: Ignore failed connection attempt on /var/run/gluster/221e0a3b84a49826116ab9161a9a6207.socket, (No such file or directory)
[2017-03-16 21:04:27.162642] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-16 21:04:27.162691] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-16 21:04:27.162914] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-16 21:04:27.162939] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-16 21:04:27.163150] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-16 21:04:27.163173] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-16 21:04:27.164179] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:27.164288] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-16 21:04:28.161509] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:28.182622] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58, host: gluster04, port: 0
[2017-03-16 21:04:28.183831] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:28.183902] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-16 21:04:28.188098] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:34.790330] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol gluster_shared_storage not held
[2017-03-16 21:04:34.790398] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol ovirt_vol03 not held
[2017-03-16 21:04:34.790447] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol vmware2 not held
[2017-03-16 21:04:34.790461] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for vmware2
[2017-03-16 21:04:35.029157] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-16 21:04:35.147670] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:45.153640] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster04 (0), ret: 0, op_ret: 0
[2017-03-16 21:04:45.157226] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-16 21:04:45.157318] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-16 21:04:45.164082] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 28071
[2017-03-16 21:04:46.164291] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-16 21:04:46.164367] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-16 21:04:46.170995] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-16 21:04:46.171078] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-16 21:04:46.171294] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-16 21:04:46.171323] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-16 21:04:46.171512] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-16 21:04:46.171533] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-16 21:04:46.172511] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:46.172601] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-16 21:04:47.169585] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:47.186540] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58, host: gluster04, port: 0
[2017-03-16 21:04:47.188556] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:47.188596] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-16 21:04:47.194752] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-16 21:04:34.790242] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <gluster04> (<3e37013d-4750-403e-bf02-305e34546d58>), in state <Peer in Cluster>, has disconnected from glusterd.
[2017-03-16 21:04:34.790373] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
[2017-03-16 21:04:34.790429] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for ovirt_vol03
[2017-03-17 07:59:55.339178] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2017-03-17 08:00:19.325921] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2017-03-17 08:03:52.656453] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
The message "I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req" repeated 3 times between [2017-03-17 08:03:52.656453] and [2017-03-17 08:03:52.661842]
[2017-03-17 08:04:09.844765] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
The message "I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req" repeated 7 times between [2017-03-17 08:04:09.844765] and [2017-03-17 08:04:29.602212]
[2017-03-17 08:07:08.433593] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2017-03-17 10:22:56.569902] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <gluster04> (<3e37013d-4750-403e-bf02-305e34546d58>), in state <Peer in Cluster>, has disconnected from glusterd.
[2017-03-17 10:22:56.570070] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol gluster_shared_storage not held
[2017-03-17 10:22:56.570091] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
[2017-03-17 10:22:56.570138] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol ovirt_vol03 not held
[2017-03-17 10:22:56.570152] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for ovirt_vol03
[2017-03-17 10:22:56.570189] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol vmware2 not held
[2017-03-17 10:22:56.570202] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for vmware2
[2017-03-17 10:23:07.985463] E [socket.c:2309:socket_connect_finish] 0-management: connection to 192.168.209.197:24007 failed (Connection refused)
[2017-03-17 10:27:03.380020] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-17 10:27:03.391280] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 10:27:04.289389] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster04 (0), ret: 0, op_ret: 0
[2017-03-17 10:27:04.292704] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-17 10:27:04.292787] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-17 10:27:04.299374] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 28341
[2017-03-17 10:27:05.299579] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-17 10:27:05.299690] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-17 10:27:05.308157] W [socket.c:3075:socket_connect] 0-glustershd: Ignore failed connection attempt on /var/run/gluster/221e0a3b84a49826116ab9161a9a6207.socket, (No such file or directory)
[2017-03-17 10:27:05.308534] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-17 10:27:05.308619] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-17 10:27:05.308877] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-17 10:27:05.308913] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-17 10:27:05.309143] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-17 10:27:05.309175] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-17 10:27:05.310149] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 10:27:05.310232] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 10:27:06.291757] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 10:27:06.304558] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58, host: gluster04, port: 0
[2017-03-17 10:27:06.305659] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 10:27:06.305715] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 10:27:06.309874] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 10:30:12.481985] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <gluster03> (<0b388c89-ea8b-4e6a-8649-1d870d2bf3bc>), in state <Peer in Cluster>, has disconnected from glusterd.
[2017-03-17 10:30:12.482105] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol gluster_shared_storage not held
[2017-03-17 10:30:12.482124] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
[2017-03-17 10:30:12.482156] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol ovirt_vol03 not held
[2017-03-17 10:30:12.482170] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for ovirt_vol03
[2017-03-17 10:30:12.482199] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol vmware2 not held
[2017-03-17 10:30:12.482225] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for vmware2
[2017-03-17 10:30:24.313465] E [socket.c:2309:socket_connect_finish] 0-management: connection to 192.168.209.196:24007 failed (Connection refused)
[2017-03-17 11:04:20.858625] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-17 11:04:20.870200] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:04:23.723573] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster03 (0), ret: 0, op_ret: 0
[2017-03-17 11:04:23.726291] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-17 11:04:23.726384] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-17 11:04:23.733315] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 36410
[2017-03-17 11:04:24.733515] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-17 11:04:24.733598] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-17 11:04:24.742033] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-17 11:04:24.742126] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-17 11:04:24.742377] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-17 11:04:24.742414] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-17 11:04:24.742619] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-17 11:04:24.742672] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-17 11:04:24.743573] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:04:24.743649] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:04:25.572466] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:04:25.591738] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc, host: gluster03, port: 0
[2017-03-17 11:04:25.596190] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:04:25.596263] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:04:25.601309] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:08:39.427697] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <gluster02> (<ab091583-d6ee-48be-b0b4-99e1aabd843f>), in state <Peer in Cluster>, has disconnected from glusterd.
[2017-03-17 11:08:39.427819] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol gluster_shared_storage not held
[2017-03-17 11:08:39.427866] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
[2017-03-17 11:08:39.427900] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol ovirt_vol03 not held
[2017-03-17 11:08:39.427914] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for ovirt_vol03
[2017-03-17 11:08:39.427943] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x1deac) [0x7f4e11d69eac] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x27a58) [0x7f4e11d73a58] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xd097a) [0x7f4e11e1c97a] ) 0-management: Lock for vol vmware2 not held
[2017-03-17 11:08:39.427956] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for vmware2
[2017-03-17 11:08:50.753496] E [socket.c:2309:socket_connect_finish] 0-management: connection to 192.168.209.195:24007 failed (Connection refused)
[2017-03-17 11:13:06.034625] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-17 11:13:06.046553] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:13:07.994945] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster02 (0), ret: 0, op_ret: 0
[2017-03-17 11:13:07.998799] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:13:07.998858] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:13:08.000793] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-17 11:13:08.000859] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-17 11:13:08.007929] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 1638
[2017-03-17 11:13:09.008151] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-17 11:13:09.008224] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-17 11:13:09.016422] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-17 11:13:09.016489] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-17 11:13:09.016685] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-17 11:13:09.016709] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-17 11:13:09.016930] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-17 11:13:09.016954] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-17 11:13:09.017984] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:13:09.028296] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f, host: gluster02, port: 0
[2017-03-17 11:13:09.029656] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:13:09.029689] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:13:09.032509] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:16:28.514914] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f4e1c450dc5] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x7f4e1dae2cd5] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x7f4e1dae2b4b] ) 0-: received signum (15), shutting down
[2017-03-17 11:19:58.582677] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.8.10 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2017-03-17 11:19:58.593870] I [MSGID: 106478] [glusterd.c:1379:init] 0-management: Maximum allowed open file descriptors set to 65536
[2017-03-17 11:19:58.593931] I [MSGID: 106479] [glusterd.c:1428:init] 0-management: Using /var/lib/glusterd as working directory
[2017-03-17 11:19:58.602391] E [rpc-transport.c:287:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.8.10/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2017-03-17 11:19:58.602421] W [rpc-transport.c:291:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
[2017-03-17 11:19:58.602433] W [rpcsvc.c:1638:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2017-03-17 11:19:58.602444] E [MSGID: 106243] [glusterd.c:1652:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2017-03-17 11:19:58.605789] I [MSGID: 106228] [glusterd.c:429:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [No such file or directory]
[2017-03-17 11:19:58.608534] I [MSGID: 106513] [glusterd-store.c:2098:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30800
[2017-03-17 11:19:58.680262] I [MSGID: 106498] [glusterd-handler.c:3649:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2017-03-17 11:19:58.680699] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:19:58.684337] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:19:58.687470] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
The message "I [MSGID: 106498] [glusterd-handler.c:3649:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0" repeated 2 times between [2017-03-17 11:19:58.680262] and [2017-03-17 11:19:58.680636]
[2017-03-17 11:19:58.692446] I [MSGID: 106544] [glusterd.c:155:glusterd_uuid_init] 0-management: retrieved UUID: f15808d9-ab37-4126-b4e4-1d14011e4e0f
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.socket.listen-backlog 128
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:
+------------------------------------------------------------------------------+
[2017-03-17 11:19:58.732546] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-03-17 11:20:00.446572] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58, host: gluster04, port: 0
[2017-03-17 11:20:00.447832] C [MSGID: 106003] [glusterd-server-quorum.c:341:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume ovirt_vol03. Starting local bricks.
[2017-03-17 11:20:00.451312] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.454339] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.457720] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.460544] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.463429] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.466930] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.469971] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.472950] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.473194] C [MSGID: 106003] [glusterd-server-quorum.c:341:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume vmware2. Starting local bricks.
[2017-03-17 11:20:00.476372] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.479275] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.482331] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.485474] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.488465] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.491268] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.494206] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.497061] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.500138] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.502931] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.503331] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600
[2017-03-17 11:20:00.503505] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-17 11:20:00.503546] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-17 11:20:00.504095] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600
[2017-03-17 11:20:00.507170] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: glustershd already stopped
[2017-03-17 11:20:00.507193] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-17 11:20:00.507211] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-17 11:20:00.509596] W [socket.c:3075:socket_connect] 0-glustershd: Ignore failed connection attempt on , (No such file or directory)
[2017-03-17 11:20:00.509743] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2017-03-17 11:20:00.509966] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-17 11:20:00.509996] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-17 11:20:00.510054] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2017-03-17 11:20:00.510302] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-17 11:20:00.510323] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-17 11:20:00.510365] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2017-03-17 11:20:00.510579] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-17 11:20:00.510591] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-17 11:20:00.513180] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:20:00.513766] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2017-03-17 11:20:00.513896] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2017-03-17 11:20:00.514010] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2017-03-17 11:20:00.514183] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 11:20:00.514271] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:20:00.518415] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.520323] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/ovirt_disk1/ovirt_vol03 has disconnected from glusterd.
[2017-03-17 11:20:00.522206] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.524084] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/ovirt_disk2/ovirt_vol03 has disconnected from glusterd.
[2017-03-17 11:20:00.526075] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.528027] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/ovirt_disk3/ovirt_vol03 has disconnected from glusterd.
[2017-03-17 11:20:00.529873] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.531706] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/ovirt_disk4/ovirt_vol03 has disconnected from glusterd.
[2017-03-17 11:20:00.533576] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.535486] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/ovirt_disk5/ovirt_vol03 has disconnected from glusterd.
[2017-03-17 11:20:00.537345] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.539138] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/ovirt_disk6/ovirt_vol03 has disconnected from glusterd.
[2017-03-17 11:20:00.540956] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.542769] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/ovirt_disk7/ovirt_vol03 has disconnected from glusterd.
[2017-03-17 11:20:00.544629] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.546537] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/ovirt_disk8/ovirt_vol03 has disconnected from glusterd.
[2017-03-17 11:20:00.548457] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.550261] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk1/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.552080] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.553848] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk2/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.555648] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.557434] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk3/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.559223] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.561012] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk4/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.562881] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.564760] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk5/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.566561] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.568432] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk6/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.570205] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.571953] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk7/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.573731] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.575579] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk8/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.577345] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.579200] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk9/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.580935] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.582684] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick gluster01:/mnt/disk10/vmware2 has disconnected from glusterd.
[2017-03-17 11:20:00.582729] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 11:20:00.584575] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2017-03-17 11:20:00.586269] I [MSGID: 106005] [glusterd-handler.c:5055:__glusterd_brick_rpc_notify] 0-management: Brick 192.168.209.194:/var/lib/glusterd/ss_brick has disconnected from glusterd.
[2017-03-17 11:20:00.619048] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk3/ovirt_vol03 on port 49506
[2017-03-17 11:20:00.619157] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk8/ovirt_vol03 on port 49511
[2017-03-17 11:20:00.621294] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk5/ovirt_vol03 on port 49508
[2017-03-17 11:20:00.621392] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk1/ovirt_vol03 on port 49504
[2017-03-17 11:20:00.621463] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk7/ovirt_vol03 on port 49510
[2017-03-17 11:20:00.623651] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk2/ovirt_vol03 on port 49505
[2017-03-17 11:20:00.623759] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk1/vmware2 on port 49512
[2017-03-17 11:20:00.623843] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk6/ovirt_vol03 on port 49509
[2017-03-17 11:20:00.623921] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/ovirt_disk4/ovirt_vol03 on port 49507
[2017-03-17 11:20:00.623990] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk2/vmware2 on port 49513
[2017-03-17 11:20:00.625322] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk5/vmware2 on port 49516
[2017-03-17 11:20:00.627431] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk3/vmware2 on port 49514
[2017-03-17 11:20:00.630282] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk4/vmware2 on port 49515
[2017-03-17 11:20:00.630998] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk8/vmware2 on port 49519
[2017-03-17 11:20:00.633546] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk7/vmware2 on port 49518
[2017-03-17 11:20:00.634243] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk6/vmware2 on port 49517
[2017-03-17 11:20:00.636989] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk10/vmware2 on port 49521
[2017-03-17 11:20:00.637377] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /var/lib/glusterd/ss_brick on port 49522
[2017-03-17 11:20:00.638816] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk9/vmware2 on port 49520
[2017-03-17 11:20:00.902759] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc, host: gluster03, port: 0
[2017-03-17 11:20:00.904130] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-17 11:20:00.904181] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-17 11:20:00.907672] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 3367
[2017-03-17 11:20:01.907853] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-17 11:20:01.907933] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-17 11:20:01.911797] W [socket.c:3075:socket_connect] 0-glustershd: Ignore failed connection attempt on , (No such file or directory)
[2017-03-17 11:20:01.911965] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-17 11:20:01.911989] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-17 11:20:01.912102] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-17 11:20:01.912121] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-17 11:20:01.912215] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-17 11:20:01.912227] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-17 11:20:01.912885] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:20:01.912949] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:20:01.915243] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:20:01.915366] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-17 11:20:01.922949] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-17 11:20:01.925677] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 11:20:01.927078] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster04 (0), ret: 0, op_ret: 0
[2017-03-17 11:20:01.928767] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 11:20:01.928799] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:20:01.930246] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 3e37013d-4750-403e-bf02-305e34546d58
[2017-03-17 11:20:01.932563] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:20:01.934196] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster03 (0), ret: 0, op_ret: 0
[2017-03-17 11:20:01.936165] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:20:01.936211] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:20:01.937736] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0b388c89-ea8b-4e6a-8649-1d870d2bf3bc
[2017-03-17 11:20:04.033805] I [MSGID: 106164] [glusterd-handshake.c:1326:__server_get_volume_info] 0-glusterd: Received get volume info req
[2017-03-17 11:20:12.103473] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f, host: gluster02, port: 0
[2017-03-17 11:20:12.105660] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:20:12.105710] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:20:12.107214] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-17 11:20:12.107283] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-17 11:20:12.113587] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 4939
[2017-03-17 11:20:13.113748] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-17 11:20:13.113833] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-17 11:20:13.117923] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2017-03-17 11:20:13.117983] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2017-03-17 11:20:13.118160] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-17 11:20:13.118182] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-17 11:20:13.118350] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-17 11:20:13.118377] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-17 11:20:13.119024] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:20:13.123159] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2017-03-17 11:20:13.130375] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:20:13.131823] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to gluster02 (0), ret: 0, op_ret: 0
[2017-03-17 11:20:13.133726] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:20:13.133756] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2017-03-17 11:20:13.135195] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ab091583-d6ee-48be-b0b4-99e1aabd843f
[2017-03-17 11:26:33.647296] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2017-03-17 11:26:33.647370] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2
[2017-03-17 11:26:33.655216] W [MSGID: 106122] [glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick prevalidation failed.
[2017-03-17 11:26:33.655252] E [MSGID: 106122] [glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Add brick on local node
[2017-03-17 11:26:33.655267] E [MSGID: 106122] [glusterd-mgmt.c:2009:glusterd_mgmt_v3_initiate_all_phases] 0-management: Pre Validation Failed
[2017-03-17 11:27:07.714781] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x33015) [0x7fa9b85e2015] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xcbf05) [0x7fa9b867af05] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fa9c3e9d235] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh --volname=vmware2 --version=1 --volume-op=add-brick --gd-workdir=/var/lib/glusterd
[2017-03-17 11:27:07.687113] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2
[2017-03-17 11:27:07.714882] I [MSGID: 106578] [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management: replica-count is set 0
[2017-03-17 11:27:07.714924] I [MSGID: 106578] [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management: type is set 0, need to change it
[2017-03-17 11:27:07.827909] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk11/vmware2 on port 49523
[2017-03-17 11:27:07.828954] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:27:07.829291] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2017-03-17 11:27:07.829333] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2017-03-17 11:27:07.836191] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 10495
[2017-03-17 11:27:08.836392] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2017-03-17 11:27:08.836454] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2017-03-17 11:27:08.842080] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2017-03-17 11:27:08.842141] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2017-03-17 11:27:08.842336] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2017-03-17 11:27:08.842362] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2017-03-17 11:27:07.687083] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2017-03-17 11:30:24.627490] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id ee992bc8-c996-48a2-9db5-122cbbf1cdd4 for key rebalance-id
[2017-03-17 11:30:29.634416] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:30:34.642344] E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index
[2017-03-17 11:30:35.234790] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated
[2017-03-17 11:30:35.253335] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available)
[2017-03-17 11:30:35.254636] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected.
[2017-03-17 11:30:35.254685] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=0 total=0
[2017-03-17 11:30:35.254700] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=0 total=0
The message "E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 2 times between [2017-03-17 11:30:34.642344] and [2017-03-17 11:30:34.642592]
[2017-03-17 11:35:10.717577] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id 966b474d-3561-4927-9e41-eff049796bf5 for key rebalance-id
[2017-03-17 11:35:15.724490] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2017-03-17 11:35:20.732576] E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index
The message "E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 2 times between [2017-03-17 11:35:20.732576] and [2017-03-17 11:35:20.732738]
[2017-03-17 11:36:29.356251] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated
[2017-03-17 11:36:29.388038] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available)
[2017-03-17 11:36:29.389202] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected.
[2017-03-17 11:36:29.389241] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=1 total=8 [2017-03-17 11:36:29.389256] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=1 total=8 [2017-03-17 11:40:51.591044] I [MSGID: 106484] [glusterd-brick-ops.c:913:__glusterd_handle_remove_brick] 0-management: Received rem brick req [2017-03-17 11:40:51.591134] I [MSGID: 106062] [glusterd-brick-ops.c:991:__glusterd_handle_remove_brick] 0-management: request to change replica-count to 2 [2017-03-17 11:40:51.598154] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id 604bab39-20e0-463a-81d6-39b0d111f34d for key remove-brick-id [2017-03-17 11:40:51.601674] I [MSGID: 106062] [glusterd-op-sm.c:5985:glusterd_bricks_select_remove_brick] 0-management: force flag is not set [2017-03-17 11:40:51.770398] W [dict.c:1390:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x87eb9) [0x7fa9b8636eb9] -->/lib64/libglusterfs.so.0(dict_get_str_boolean+0x32) [0x7fa9c3e4c3d2] -->/lib64/libglusterfs.so.0(+0x2219e) [0x7fa9c3e4a19e] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument] [2017-03-17 11:40:51.775429] W [dict.c:1390:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x87eb9) [0x7fa9b8636eb9] -->/lib64/libglusterfs.so.0(dict_get_str_boolean+0x32) [0x7fa9c3e4c3d2] -->/lib64/libglusterfs.so.0(+0x2219e) [0x7fa9c3e4a19e] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument] [2017-03-17 11:40:51.804735] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-17 11:40:51.804783] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-17 11:40:51.805114] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-17 11:40:51.805133] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-17 11:40:51.805431] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-17 11:40:51.805449] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-17 11:40:56.811838] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 11:42:57.143860] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated [2017-03-17 11:42:57.176471] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available) [2017-03-17 11:42:57.177978] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected. 
[2017-03-17 11:42:57.178022] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=1 total=22 [2017-03-17 11:42:57.178040] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=1 total=22 [2017-03-17 11:44:07.107253] I [MSGID: 106484] [glusterd-brick-ops.c:913:__glusterd_handle_remove_brick] 0-management: Received rem brick req [2017-03-17 11:44:07.107344] I [MSGID: 106062] [glusterd-brick-ops.c:991:__glusterd_handle_remove_brick] 0-management: request to change replica-count to 2 [2017-03-17 11:44:07.118718] I [MSGID: 106062] [glusterd-op-sm.c:5985:glusterd_bricks_select_remove_brick] 0-management: force flag is not set [2017-03-17 11:44:07.268747] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-17 11:44:07.268786] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-17 11:44:07.272383] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 14929 [2017-03-17 11:44:08.272559] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-17 11:44:08.272632] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-17 11:44:08.279975] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-17 11:44:08.280046] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-17 11:44:08.280272] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-17 11:44:08.280301] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-17 11:44:08.285014] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=0 total=0 [2017-03-17 11:44:08.285066] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=0 total=0 [2017-03-17 11:44:08.285173] I [MSGID: 106144] [glusterd-pmap.c:295:pmap_registry_remove] 0-pmap: removing brick /mnt/disk11/vmware2 on port 49523 [2017-03-17 11:44:08.285220] W [socket.c:590:__socket_rwv] 0-socket.management: writev on 192.168.209.194:49147 failed (Broken pipe) [2017-03-17 11:44:08.285247] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now [2017-03-17 11:44:37.531627] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vmware2 [2017-03-17 11:47:46.698680] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vmware2 [2017-03-17 11:48:35.678470] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2017-03-17 11:48:35.678557] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2 [2017-03-17 11:48:35.693817] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Pre Validation failed on gluster02. /mnt/disk11/vmware2 is already part of a volume [2017-03-17 11:48:35.693902] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Pre Validation failed on gluster04. 
/mnt/disk11/vmware2 is already part of a volume [2017-03-17 11:48:35.694052] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Pre Validation failed on gluster03. /mnt/disk11/vmware2 is already part of a volume [2017-03-17 11:48:35.694257] E [MSGID: 106122] [glusterd-mgmt.c:947:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed on peers [2017-03-17 11:48:35.694297] E [MSGID: 106122] [glusterd-mgmt.c:2009:glusterd_mgmt_v3_initiate_all_phases] 0-management: Pre Validation Failed [2017-03-17 11:50:03.904735] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2017-03-17 11:50:03.904830] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2 [2017-03-17 11:50:03.911014] E [MSGID: 106451] [glusterd-utils.c:6197:glusterd_is_path_in_use] 0-management: /mnt/disk11/vmware2 is already part of a volume [No data available] [2017-03-17 11:50:03.911320] W [MSGID: 106122] [glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick prevalidation failed. [2017-03-17 11:50:03.911338] E [MSGID: 106122] [glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Add brick on local node [2017-03-17 11:50:03.911351] E [MSGID: 106122] [glusterd-mgmt.c:2009:glusterd_mgmt_v3_initiate_all_phases] 0-management: Pre Validation Failed [2017-03-17 11:50:17.235524] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x33015) [0x7fa9b85e2015] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xcbf05) [0x7fa9b867af05] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fa9c3e9d235] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh --volname=vmware2 --version=1 --volume-op=add-brick --gd-workdir=/var/lib/glusterd [2017-03-17 11:50:17.203007] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2 [2017-03-17 11:50:17.235619] I [MSGID: 106578] [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management: replica-count is set 0 [2017-03-17 11:50:17.235659] I [MSGID: 106578] [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management: type is set 0, need to change it [2017-03-17 11:50:17.357621] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk11/vmware2 on port 49523 [2017-03-17 11:50:17.358710] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 11:50:17.359137] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-17 11:50:17.359198] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-17 11:50:17.366409] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 21883 [2017-03-17 11:50:18.366621] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-17 11:50:18.366689] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-17 11:50:18.374098] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-17 11:50:18.374170] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-17 
11:50:18.374376] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-17 11:50:18.374403] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-17 11:50:27.236944] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id 07e22264-d31a-4bcd-b994-1d72be296c14 for key rebalance-id [2017-03-17 11:50:32.245397] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 11:50:37.255067] E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index The message "E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 2 times between [2017-03-17 11:50:37.255067] and [2017-03-17 11:50:37.255590] [2017-03-17 11:50:37.778836] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated [2017-03-17 11:50:37.778888] E [MSGID: 106224] [glusterd-rebalance.c:1130:glusterd_defrag_event_notify_handle] 0-management: Failed to update status [2017-03-17 11:50:37.798264] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available) [2017-03-17 11:50:37.799556] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected. [2017-03-17 11:50:37.799585] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=0 total=0 [2017-03-17 11:50:37.799595] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=0 total=0 [2017-03-17 11:50:58.543133] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id 116e7f14-4b84-4faa-8f20-04d17b83842f for key rebalance-id [2017-03-17 11:51:03.551433] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 11:51:08.561050] E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index The message "E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 2 times between [2017-03-17 11:51:08.561050] and [2017-03-17 11:51:08.561230] [2017-03-17 11:51:32.650826] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vmware2 [2017-03-17 11:50:17.202971] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2017-03-17 11:52:22.894364] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated [2017-03-17 11:52:22.932584] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available) [2017-03-17 11:52:22.933910] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected. 
[2017-03-17 11:52:22.933946] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=1 total=16 [2017-03-17 11:52:22.933958] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=1 total=16 [2017-03-17 12:50:34.841087] I [MSGID: 106484] [glusterd-brick-ops.c:913:__glusterd_handle_remove_brick] 0-management: Received rem brick req [2017-03-17 12:50:34.841329] I [MSGID: 106062] [glusterd-brick-ops.c:991:__glusterd_handle_remove_brick] 0-management: request to change replica-count to 2 [2017-03-17 12:50:34.848566] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id 2a992f14-f884-45ce-875d-87a87e995803 for key remove-brick-id [2017-03-17 12:50:34.852421] I [MSGID: 106062] [glusterd-op-sm.c:5985:glusterd_bricks_select_remove_brick] 0-management: force flag is not set [2017-03-17 12:50:35.009746] W [dict.c:1390:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x87eb9) [0x7fa9b8636eb9] -->/lib64/libglusterfs.so.0(dict_get_str_boolean+0x32) [0x7fa9c3e4c3d2] -->/lib64/libglusterfs.so.0(+0x2219e) [0x7fa9c3e4a19e] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument] [2017-03-17 12:50:35.012744] W [dict.c:1390:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x87eb9) [0x7fa9b8636eb9] -->/lib64/libglusterfs.so.0(dict_get_str_boolean+0x32) [0x7fa9c3e4c3d2] -->/lib64/libglusterfs.so.0(+0x2219e) [0x7fa9c3e4a19e] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument] [2017-03-17 12:50:35.034281] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-17 12:50:35.034310] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped [2017-03-17 12:50:35.034504] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-17 12:50:35.034517] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-17 12:50:35.034704] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-17 12:50:35.034716] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-17 12:50:40.042002] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 12:52:33.355827] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated [2017-03-17 12:52:33.387396] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available) [2017-03-17 12:52:33.389075] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected. 
[2017-03-17 12:52:33.389124] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=1 total=8 [2017-03-17 12:52:33.389140] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=1 total=8 [2017-03-17 12:52:58.914097] I [MSGID: 106484] [glusterd-brick-ops.c:913:__glusterd_handle_remove_brick] 0-management: Received rem brick req [2017-03-17 12:52:58.914195] I [MSGID: 106062] [glusterd-brick-ops.c:991:__glusterd_handle_remove_brick] 0-management: request to change replica-count to 2 [2017-03-17 12:52:58.924569] I [MSGID: 106062] [glusterd-op-sm.c:5985:glusterd_bricks_select_remove_brick] 0-management: force flag is not set [2017-03-17 12:52:59.077791] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-17 12:52:59.077837] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-17 12:52:59.084087] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 24406 [2017-03-17 12:53:00.084262] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-17 12:53:00.084327] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-17 12:53:00.093306] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-17 12:53:00.093392] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-17 12:53:00.093612] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-17 12:53:00.093638] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-17 12:53:00.098059] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=0 total=0 [2017-03-17 12:53:00.098104] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=0 total=0 [2017-03-17 12:53:00.098209] I [MSGID: 106144] [glusterd-pmap.c:295:pmap_registry_remove] 0-pmap: removing brick /mnt/disk11/vmware2 on port 49523 [2017-03-17 12:53:00.098252] W [socket.c:590:__socket_rwv] 0-socket.management: writev on 192.168.209.194:49147 failed (Broken pipe) [2017-03-17 12:53:00.098267] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now [2017-03-17 12:53:25.021280] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vmware2 [2017-03-17 13:11:36.858697] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2017-03-17 13:11:36.858805] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2 [2017-03-17 13:11:36.892919] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x33015) [0x7fa9b85e2015] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xcbf05) [0x7fa9b867af05] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fa9c3e9d235] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh --volname=vmware2 --version=1 --volume-op=add-brick --gd-workdir=/var/lib/glusterd [2017-03-17 13:11:36.893035] I [MSGID: 106578] [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management: replica-count is set 0 [2017-03-17 
13:11:36.893066] I [MSGID: 106578] [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management: type is set 0, need to change it [2017-03-17 13:11:37.017215] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk11/vmware2 on port 49523 [2017-03-17 13:11:37.018255] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 13:11:37.018759] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-17 13:11:37.018834] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-17 13:11:37.026530] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 47797 [2017-03-17 13:11:38.026725] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-17 13:11:38.026826] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-17 13:11:38.035891] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-17 13:11:38.035980] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-17 13:11:38.036218] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-17 13:11:38.036273] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-17 13:14:02.062855] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id c6d3cf35-4961-4c0c-9824-bbbd2f966a74 for key rebalance-id [2017-03-17 13:14:07.073481] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 13:14:12.084526] E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index [2017-03-17 13:14:12.539808] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated [2017-03-17 13:14:12.539858] E [MSGID: 106224] [glusterd-rebalance.c:1130:glusterd_defrag_event_notify_handle] 0-management: Failed to update status [2017-03-17 13:14:12.554955] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available) [2017-03-17 13:14:12.556105] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected. 
[2017-03-17 13:14:12.556136] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=0 total=0 [2017-03-17 13:14:12.556148] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=0 total=0 The message "E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 2 times between [2017-03-17 13:14:12.084526] and [2017-03-17 13:14:12.085027] [2017-03-17 13:16:32.227224] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id c7c3ddaa-0ad3-4d00-824d-fcb5f149f7ff for key rebalance-id [2017-03-17 13:16:37.238063] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 13:16:42.248957] E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index [2017-03-17 13:17:02.393588] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated [2017-03-17 13:17:02.410969] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available) [2017-03-17 13:17:02.412232] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected. [2017-03-17 13:17:02.412272] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=1 total=1 [2017-03-17 13:17:02.412285] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=1 total=1 The message "E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 2 times between [2017-03-17 13:16:42.248957] and [2017-03-17 13:16:42.249394] [2017-03-17 13:27:04.430285] I [MSGID: 106484] [glusterd-brick-ops.c:913:__glusterd_handle_remove_brick] 0-management: Received rem brick req [2017-03-17 13:27:04.430389] I [MSGID: 106062] [glusterd-brick-ops.c:991:__glusterd_handle_remove_brick] 0-management: request to change replica-count to 2 [2017-03-17 13:27:04.433033] E [MSGID: 106265] [glusterd-brick-ops.c:1170:__glusterd_handle_remove_brick] 0-management: Bricks not from same subvol for replica [2017-03-17 13:27:19.391422] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id 25131768-ed9f-48d7-85fc-4f47dc982177 for key remove-brick-id [2017-03-17 13:27:19.395277] I [MSGID: 106062] [glusterd-op-sm.c:5985:glusterd_bricks_select_remove_brick] 0-management: force flag is not set [2017-03-17 13:27:19.554744] W [dict.c:1390:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x87eb9) [0x7fa9b8636eb9] -->/lib64/libglusterfs.so.0(dict_get_str_boolean+0x32) [0x7fa9c3e4c3d2] -->/lib64/libglusterfs.so.0(+0x2219e) [0x7fa9c3e4a19e] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument] [2017-03-17 13:27:19.557854] W [dict.c:1390:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x87eb9) [0x7fa9b8636eb9] -->/lib64/libglusterfs.so.0(dict_get_str_boolean+0x32) [0x7fa9c3e4c3d2] -->/lib64/libglusterfs.so.0(+0x2219e) [0x7fa9c3e4a19e] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument] [2017-03-17 13:27:19.576381] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped [2017-03-17 13:27:19.576430] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: 
quotad service is stopped [2017-03-17 13:27:19.576619] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-17 13:27:19.576632] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-17 13:27:19.576820] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-17 13:27:19.576836] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-17 13:27:24.584595] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 13:27:42.108874] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated [2017-03-17 13:27:42.126490] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available) [2017-03-17 13:27:42.127470] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected. [2017-03-17 13:27:42.127492] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=0 total=0 [2017-03-17 13:27:42.127501] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=0 total=0 [2017-03-17 13:27:19.384469] I [MSGID: 106484] [glusterd-brick-ops.c:913:__glusterd_handle_remove_brick] 0-management: Received rem brick req [2017-03-17 13:27:19.384520] I [MSGID: 106062] [glusterd-brick-ops.c:991:__glusterd_handle_remove_brick] 0-management: request to change replica-count to 2 [2017-03-17 13:28:02.005632] I [MSGID: 106484] [glusterd-brick-ops.c:913:__glusterd_handle_remove_brick] 0-management: Received rem brick req [2017-03-17 13:28:02.005723] I [MSGID: 106062] [glusterd-brick-ops.c:991:__glusterd_handle_remove_brick] 0-management: request to change replica-count to 2 [2017-03-17 13:28:02.015990] I [MSGID: 106062] [glusterd-op-sm.c:5985:glusterd_bricks_select_remove_brick] 0-management: force flag is not set [2017-03-17 13:28:02.151944] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-17 13:28:02.151995] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-17 13:28:02.157588] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 6685 [2017-03-17 13:28:03.157723] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-17 13:28:03.157781] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-17 13:28:03.164257] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-17 13:28:03.164299] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-17 13:28:03.164421] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-17 13:28:03.164437] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-17 13:28:03.168289] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=0 total=0 [2017-03-17 13:28:03.168316] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=0 total=0 
[2017-03-17 13:28:03.168379] I [MSGID: 106144] [glusterd-pmap.c:295:pmap_registry_remove] 0-pmap: removing brick /mnt/disk11/vmware2 on port 49523 [2017-03-17 13:28:03.168408] W [socket.c:590:__socket_rwv] 0-socket.management: writev on 192.168.209.194:49147 failed (Broken pipe) [2017-03-17 13:28:03.168417] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now [2017-03-17 13:30:23.996874] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vmware2 [2017-03-17 13:57:41.064423] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2017-03-17 13:57:41.064514] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2 [2017-03-17 13:57:41.096702] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0x33015) [0x7fa9b85e2015] -->/usr/lib64/glusterfs/3.8.10/xlator/mgmt/glusterd.so(+0xcbf05) [0x7fa9b867af05] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fa9c3e9d235] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh --volname=vmware2 --version=1 --volume-op=add-brick --gd-workdir=/var/lib/glusterd [2017-03-17 13:57:41.096829] I [MSGID: 106578] [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management: replica-count is set 0 [2017-03-17 13:57:41.096882] I [MSGID: 106578] [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management: type is set 0, need to change it [2017-03-17 13:57:41.205728] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /mnt/disk11/vmware2 on port 49523 [2017-03-17 13:57:41.206645] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 13:57:41.207011] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2017-03-17 13:57:41.207055] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped [2017-03-17 13:57:41.214158] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 13038 [2017-03-17 13:57:42.214350] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped [2017-03-17 13:57:42.214420] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service [2017-03-17 13:57:42.224864] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2017-03-17 13:57:42.224925] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped [2017-03-17 13:57:42.225117] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2017-03-17 13:57:42.225142] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped [2017-03-17 13:59:27.523206] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id 1246c7f4-a3b6-40d9-ad21-44f7e334dddf for key rebalance-id [2017-03-17 13:59:32.534689] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 13:59:37.546601] E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index [2017-03-17 13:59:38.016942] I [MSGID: 106172] 
[glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated [2017-03-17 13:59:38.028439] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available) [2017-03-17 13:59:38.029706] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected. [2017-03-17 13:59:38.029736] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=0 total=0 [2017-03-17 13:59:38.029745] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=0 total=0 The message "E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 2 times between [2017-03-17 13:59:37.546601] and [2017-03-17 13:59:37.547145] [2017-03-17 14:00:47.087860] I [MSGID: 106539] [glusterd-utils.c:10132:glusterd_generate_and_set_task_id] 0-management: Generated task-id e89948ac-882c-476b-a229-cca2838b43cf for key rebalance-id [2017-03-17 14:00:52.099347] I [rpc-clnt.c:1046:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2017-03-17 14:00:57.111794] E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index [2017-03-17 14:01:18.462574] I [MSGID: 106172] [glusterd-handshake.c:977:__server_event_notify] 0-glusterd: received defrag status updated [2017-03-17 14:01:18.481364] W [socket.c:590:__socket_rwv] 0-management: readv on /var/run/gluster/gluster-rebalance-02328d46-a285-4533-aa3a-fb9bfeb688bf.sock failed (No data available) [2017-03-17 14:01:18.482878] I [MSGID: 106007] [glusterd-rebalance.c:157:__glusterd_defrag_notify] 0-management: Rebalance process for volume vmware2 has disconnected. [2017-03-17 14:01:18.482918] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=588 max=1 total=1 [2017-03-17 14:01:18.482935] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-management: size=124 max=1 total=1 The message "E [MSGID: 106062] [glusterd-utils.c:9185:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 2 times between [2017-03-17 14:00:57.111794] and [2017-03-17 14:00:57.112167] [2017-03-17 14:14:12.505444] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
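
For context, the glusterd log above appears to cover a sequence of add-brick, rebalance and remove-brick operations on the vmware2 volume: several add-brick attempts (two of which fail prevalidation because /mnt/disk11/vmware2 "is already part of a volume", typically the volume-id xattr check on a brick directory that has been used before), repeated rebalance starts whose status polling produces the "failed to get index" warnings and the "Rebalance process ... has disconnected" notices when the rebalance daemon exits, and remove-brick runs that migrate data back off the pair. The commands below are only a sketch of the kind of CLI sequence that would produce such entries; the brick list is a placeholder, not the exact set used in this thread.

  # Sketch only: brick paths are placeholders, not the exact bricks used here.
  # Add one replica pair to the distributed-replicate volume. Prevalidation fails
  # if a brick path is already part of a volume (see the 11:48:35 / 11:50:03 errors).
  gluster volume add-brick vmware2 replica 2 \
      gluster01:/mnt/disk11/vmware2 gluster03:/mnt/disk11/vmware2

  # Start migrating data onto the new bricks and poll progress; each status poll
  # shows up in glusterd logs, and the rebalance process "disconnects" when it exits.
  gluster volume rebalance vmware2 start
  gluster volume rebalance vmware2 status

  # Remove the pair again, draining its data first, then watch the migration.
  # The bricks must form one replica subvolume, otherwise glusterd rejects the
  # request ("Bricks not from same subvol for replica", as at 13:27:04 above).
  gluster volume remove-brick vmware2 replica 2 \
      gluster01:/mnt/disk11/vmware2 gluster03:/mnt/disk11/vmware2 start
  gluster volume remove-brick vmware2 replica 2 \
      gluster01:/mnt/disk11/vmware2 gluster03:/mnt/disk11/vmware2 status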