Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy

Hi Madhu,

Sorry to disturb you; could you please at least provide a workaround (to clear the stuck requests) so we can move forward?
We are also not able to find the root cause from the glusterd logs. Please find the attachment.

BR
Salam

 



From:        Shaik Salam/HYD/TCS
To:        "Madhu Rajanna" <mrajanna@xxxxxxxxxx>
Cc:        "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>, "Michael Adam" <madam@xxxxxxxxxx>
Date:        01/24/2019 04:12 PM
Subject:        Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy



Hi Madhu,

Please let me know if any other information is required.

BR
Salam




From:        Shaik Salam/HYD/TCS
To:        "Madhu Rajanna" <mrajanna@xxxxxxxxxx>
Cc:        "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>, "Michael Adam" <madam@xxxxxxxxxx>
Date:        01/24/2019 03:23 PM
Subject:        Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy



Hi Madhu,

This is the complete heketi pod log and the process list, captured after restarting the heketi pod.

BR
Salam

[attachment "heketi-pod-complete.log" deleted by Shaik Salam/HYD/TCS]  [attachment "ps-aux.txt" deleted by Shaik Salam/HYD/TCS]




From:        "Madhu Rajanna" <mrajanna@xxxxxxxxxx>
To:        "Shaik Salam" <shaik.salam@xxxxxxx>
Cc:        "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>, "Michael Adam" <madam@xxxxxxxxxx>
Date:        01/24/2019 01:55 PM
Subject:        Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy




"External email. Open with Caution"
the logs you provided is not complete, not able to figure out which command is struck, can you reattach the complete output of `ps aux` and also attach complete heketi logs. 
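
(If helpful, one way to collect these on OpenShift; the pod names below are placeholders for your environment:)

oc get pods -o wide | grep -E 'glusterfs|heketi'   # find the gluster and heketi pods
oc exec <glusterfs-pod> -- ps aux > ps-aux.txt     # full process list from a gluster pod
oc logs <heketi-pod> > heketi-pod-complete.log     # complete heketi log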

On Thu, Jan 24, 2019 at 1:41 PM Shaik Salam <shaik.salam@xxxxxxx> wrote:
Hi Madhu,

Please find the requested info.


BR

Salam


 




From:        Madhu Rajanna <mrajanna@xxxxxxxxxx>
To:        Shaik Salam <shaik.salam@xxxxxxx>
Cc:        "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>, Michael Adam <madam@xxxxxxxxxx>
Date:        01/24/2019 01:33 PM
Subject:        Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy




"External email. Open with Caution"

the heketi logs you have attached is not complete i believe, can you povide  the complete heketi logs
and also an we get the output of "ps aux" from the gluster pods ? I want to see if any lvm commands or gluster commands are "stuck".


On Thu, Jan 24, 2019 at 1:16 PM Shaik Salam <shaik.salam@xxxxxxx> wrote:
Hi Madhu,


I have restarted the heketi pod many times, but the issue is not resolved.


sh-4.4# heketi-cli server operations info

Operation Counts:

  Total: 0

  In-Flight: 0

  New: 0

  Stale: 0


Now you can see all operation counts are zero. When I try to create a single volume, the in-flight count slowly climbs to 8; below is the observation.


sh-4.4# heketi-cli server operations info

Operation Counts:
  Total: 0

  In-Flight: 6

  New: 0

  Stale: 0

sh-4.4# heketi-cli server operations info

Operation Counts:

  Total: 0

  In-Flight: 7

  New: 0

  Stale: 0

sh-4.4# heketi-cli server operations info

Operation Counts:

  Total: 0

  In-Flight: 7

  New: 0

  Stale: 0

sh-4.4# heketi-cli server operations info

Operation Counts:

  Total: 0

  In-Flight: 7

  New: 0

  Stale: 0

sh-4.4# heketi-cli server operations info

Operation Counts:

  Total: 0

  In-Flight: 7

  New: 0

  Stale: 0
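
(The repeated queries above can be automated with a small watch loop; a minimal sketch, assuming it is run inside the heketi pod with heketi-cli already configured as in the sessions above:)

sh-4.4# while true; do heketi-cli server operations info; sleep 5; done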


[negroni] Completed 200 OK in 186.286µs

[negroni] Started POST /volumes

[negroni] Started GET /operations

[negroni] Completed 200 OK in 166.294µs

[negroni] Started GET /operations

[negroni] Completed 200 OK in 186.411µs

[negroni] Started GET /operations

[negroni] Completed 200 OK in 179.796µs

[negroni] Started POST /volumes

[negroni] Started POST /volumes

[negroni] Started POST /volumes

[negroni] Started POST /volumes

[negroni] Started GET /operations

[negroni] Completed 200 OK in 131.108µs

[negroni] Started POST /volumes

[negroni] Started GET /operations

[negroni] Completed 200 OK in 111.392µs

[negroni] Started GET /operations

[negroni] Completed 200 OK in 265.023µs

[negroni] Started GET /operations

[negroni] Completed 200 OK in 179.364µs

[negroni] Started GET /operations

[negroni] Completed 200 OK in 295.058µs

[negroni] Started GET /operations

[negroni] Completed 200 OK in 146.857µs

[negroni] Started POST /volumes

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/24 07:43:36 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 403.166µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/24 07:43:51 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 193.554µs
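
(A quick way to see how often requests are being throttled is to count these warnings in the heketi log; the file name matches the attachment mentioned earlier in this thread:)

sh-4.4# grep -c 'operations in-flight (8) exceeds limit' heketi-pod-complete.log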



But the volume for the pod is still not being created. Events:

1:15:36 PM  Warning  Provisioning failed: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume Server busy. Retry operation later. (9 times in the last 2 minutes)

1:13:21 PM  Warning  Provisioning failed: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume. (8 times in the last 







From:        "Madhu Rajanna" <mrajanna@xxxxxxxxxx>
To:        "Shaik Salam" <shaik.salam@xxxxxxx>
Cc:        "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>, "Michael Adam" <madam@xxxxxxxxxx>
Date:        01/24/2019 12:51 PM
Subject:        Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy




"External email. Open with Caution"

HI Shaik,

   can you provide me the outpout of
$heketi-cli server operations info from heketi pod

as a workround you can try restarting the heketi pod. This will cause the current  operations to go stale, but other pending pvcs may go to Bound state
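
(One way to do that restart on OpenShift; the project and resource names here are placeholders, adjust them to your deployment:)

oc project <heketi-project>
oc delete pod <heketi-pod>        # the DeploymentConfig recreates the pod
# or bounce it explicitly:
oc scale dc/<heketi-dc> --replicas=0 && oc scale dc/<heketi-dc> --replicas=1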

Regards,

Madhu R

On Thu, Jan 24, 2019 at 12:36 PM Shaik Salam <shaik.salam@xxxxxxx> wrote:
Hi Madhu,


Could you please have a look at my issue if you have time (at least a workaround)?

I am unable to send mail to "John Mulligan" <John_Mulligan@xxxxxxxxxx>, who is currently handling the issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1636912

BR

Salam



From:        Shaik Salam/HYD/TCS
To:        "John Mulligan" <John_Mulligan@xxxxxxxxxx>, "Michael Adam" <madam@xxxxxxxxxx>, "Madhu Rajanna" <mrajanna@xxxxxxxxxx>
Cc:        "gluster-users@xxxxxxxxxxx List" <gluster-users@xxxxxxxxxxx>
Date:        01/24/2019 12:21 PM
Subject:        Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy




 


Hi All,


We are also facing the following issue on OpenShift Origin while creating PVCs for pods (please at least provide a workaround so we can move forward):

Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume
Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume Server busy. Retry operation later..
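
(For context, the failing PVC's events and the StorageClass parameters can be inspected from the OpenShift side; the PVC name and namespace below are placeholders:)

oc describe pvc <pvc-name> -n <namespace>   # shows the provisioning-failed events quoted above
oc describe sc glusterfs-storage            # shows the heketi endpoint (resturl) the class uses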


Please find the heketi DB dump and log below.


[negroni] Completed 429 Too Many Requests in 250.763µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:07:49 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 169.08µs

[negroni] Started DELETE /volumes/520bc5f4e1bfd029855a72f9ca7ebf6c

[negroni] Completed 404 Not Found in 148.125µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:08:04 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 496.624µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:08:04 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 101.673µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:08:19 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 209.681µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:08:19 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 103.595µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:08:34 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 297.594µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:08:34 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 96.75µs

[negroni] Started POST /volumes

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:08:49 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 477.007µs

[heketi] WARNING 2019/01/23 12:08:49 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 165.38µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:09:04 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 488.253µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:09:04 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 171.836µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:09:19 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 208.59µs

[negroni] Started POST /volumes

[heketi] WARNING 2019/01/23 12:09:19 operations in-flight (8) exceeds limit (8)

[negroni] Completed 429 Too Many Requests in 125.141µs

[negroni] Started DELETE /volumes/99e87ecd0a816ac34ae5a04eabc1d606

[negroni] Completed 404 Not Found in 138.687µs

[negroni] Started POST /volumes



BR

Salam

=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you



--
Madhu Rajanna
Software Engineer

Red Hat Bangalore, India
mrajanna@xxxxxxxxxx    M: +91-9741133155    



--
Madhu Rajanna
Software Engineer

Red Hat Bangalore, India
mrajanna@xxxxxxxxxx    M: +91-9741133155    



--
Madhu Rajanna
Software Engineer
Red Hat Bangalore, India
mrajanna@xxxxxxxxxx    M: +91-9741133155    

sh-4.2# cat /var/log/glusterfs/glusterd.log
[2019-01-21 14:31:10.000875] I [MSGID: 106488] [glusterd-handler.c:1549:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2019-01-21 15:38:04.559975] I [MSGID: 106004] [glusterd-handler.c:6382:__glusterd_peer_rpc_notify] 0-management: Peer <192.168.89.219> (<65c42108-b5f4-4dfa-a161-fe6e76b0895a>), in state <Peer in Cluster>, has disconnected from glusterd.
[2019-01-21 15:38:04.560370] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol glusterfs-registry-volume not held
[2019-01-21 15:38:04.560417] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for glusterfs-registry-volume
[2019-01-21 15:38:04.560432] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol heketidbstorage not held
[2019-01-21 15:38:04.560439] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for heketidbstorage
[2019-01-21 15:38:04.560451] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_2e2e6b9cf174901b370ea79a266c651b not held
[2019-01-21 15:38:04.560458] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_2e2e6b9cf174901b370ea79a266c651b
[2019-01-21 15:38:04.560471] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_2e6ad6aa03f7fe219807cb135ca1c766 not held
[2019-01-21 15:38:04.560478] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_2e6ad6aa03f7fe219807cb135ca1c766
[2019-01-21 15:38:04.560491] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_47c2af6849ac9ad7d7fbc897ae3ae80c not held
[2019-01-21 15:38:04.560497] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_47c2af6849ac9ad7d7fbc897ae3ae80c
[2019-01-21 15:38:04.560509] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_5101331e9d33a5d04adab92837b9d5ad not held
[2019-01-21 15:38:04.560516] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_5101331e9d33a5d04adab92837b9d5ad
[2019-01-21 15:38:04.560528] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_62905ab0e959e7663f4576501c0c9b69 not held
[2019-01-21 15:38:04.560543] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_62905ab0e959e7663f4576501c0c9b69
[2019-01-21 15:38:04.560555] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_63428c902b5d577a7739eb66a050a420 not held
[2019-01-21 15:38:04.560561] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_63428c902b5d577a7739eb66a050a420
[2019-01-21 15:38:04.560573] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_78d426889b7bd661f3d6c6f7814b6d4f not held
[2019-01-21 15:38:04.560594] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_78d426889b7bd661f3d6c6f7814b6d4f
[2019-01-21 15:38:04.560608] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_7e73769a754d7f9f6213ef3b4551af0e not held
[2019-01-21 15:38:04.560637] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_7e73769a754d7f9f6213ef3b4551af0e
[2019-01-21 15:38:04.560649] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_88fbf36bf87106e549dca765f171cf69 not held
[2019-01-21 15:38:04.560655] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_88fbf36bf87106e549dca765f171cf69
[2019-01-21 15:38:04.560667] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_937aa673e0d60126082bbd8c1589e383 not held
[2019-01-21 15:38:04.560673] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_937aa673e0d60126082bbd8c1589e383
[2019-01-21 15:38:04.560685] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_9f04d82be19e2ea8ee80deb9098cd390 not held
[2019-01-21 15:38:04.560691] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_9f04d82be19e2ea8ee80deb9098cd390
[2019-01-21 15:38:04.560702] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_bc5012fd39c5b3ab958b9da4b0256d3a not held
[2019-01-21 15:38:04.560709] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_bc5012fd39c5b3ab958b9da4b0256d3a
[2019-01-21 15:38:04.560724] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_c5fa076446479cf414397591c0af1c7f not held
[2019-01-21 15:38:04.560731] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_c5fa076446479cf414397591c0af1c7f
[2019-01-21 15:38:04.560742] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_dbc601f7162df326784aea34c4ebe8f2 not held
[2019-01-21 15:38:04.560748] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_dbc601f7162df326784aea34c4ebe8f2
[2019-01-21 15:38:04.560760] W [glusterd-locks.c:845:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2430a) [0x7f057c9da30a] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0x2e540) [0x7f057c9e4540] -->/usr/lib64/glusterfs/4.1.5/xlator/mgmt/glusterd.so(+0xe8553) [0x7f057ca9e553] ) 0-management: Lock for vol vol_dcee4f7f020d47e58fb48612bbba19d1 not held
[2019-01-21 15:38:04.560766] W [MSGID: 106117] [glusterd-handler.c:6407:__glusterd_peer_rpc_notify] 0-management: Lock not released for vol_dcee4f7f020d47e58fb48612bbba19d1
[2019-01-21 15:38:05.835994] I [MSGID: 106163] [glusterd-handshake.c:1356:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 40100
[2019-01-21 15:38:05.850979] I [MSGID: 106490] [glusterd-handler.c:2548:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 65c42108-b5f4-4dfa-a161-fe6e76b0895a
[2019-01-21 15:38:15.528341] I [MSGID: 106493] [glusterd-handler.c:3811:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 192.168.89.219 (0), ret: 0, op_ret: 0
[2019-01-21 15:38:18.844574] I [MSGID: 106492] [glusterd-handler.c:2726:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 65c42108-b5f4-4dfa-a161-fe6e76b0895a
[2019-01-21 15:38:18.851724] I [MSGID: 106502] [glusterd-handler.c:2771:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2019-01-21 15:38:18.876828] I [MSGID: 106493] [glusterd-rpc-ops.c:702:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 65c42108-b5f4-4dfa-a161-fe6e76b0895a
[2019-01-21 15:38:18.972525] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 65c42108-b5f4-4dfa-a161-fe6e76b0895a, host: 192.168.89.219, port: 0
[2019-01-21 15:38:18.984500] I [MSGID: 106492] [glusterd-handler.c:2726:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 65c42108-b5f4-4dfa-a161-fe6e76b0895a
[2019-01-21 15:38:19.005235] I [MSGID: 106502] [glusterd-handler.c:2771:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2019-01-21 15:38:19.008017] I [MSGID: 106493] [glusterd-rpc-ops.c:702:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 65c42108-b5f4-4dfa-a161-fe6e76b0895a
[2019-01-24 13:23:40.165657] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume glusterfs-registry-volume
[2019-01-24 13:23:40.170259] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume heketidbstorage
[2019-01-24 13:23:40.173704] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_2e2e6b9cf174901b370ea79a266c651b
[2019-01-24 13:23:40.176981] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_2e6ad6aa03f7fe219807cb135ca1c766
[2019-01-24 13:23:40.180569] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_47c2af6849ac9ad7d7fbc897ae3ae80c
[2019-01-24 13:23:40.183590] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_5101331e9d33a5d04adab92837b9d5ad
[2019-01-24 13:23:40.186087] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_62905ab0e959e7663f4576501c0c9b69
[2019-01-24 13:23:40.190175] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_63428c902b5d577a7739eb66a050a420
[2019-01-24 13:23:40.193596] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_78d426889b7bd661f3d6c6f7814b6d4f
[2019-01-24 13:23:40.197530] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_7e73769a754d7f9f6213ef3b4551af0e
[2019-01-24 13:23:40.200663] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_88fbf36bf87106e549dca765f171cf69
[2019-01-24 13:23:40.204496] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_937aa673e0d60126082bbd8c1589e383
[2019-01-24 13:23:40.207126] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_9f04d82be19e2ea8ee80deb9098cd390
[2019-01-24 13:23:40.210834] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_bc5012fd39c5b3ab958b9da4b0256d3a
[2019-01-24 13:23:40.213406] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_c5fa076446479cf414397591c0af1c7f
[2019-01-24 13:23:40.216514] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_dbc601f7162df326784aea34c4ebe8f2
[2019-01-24 13:23:40.219476] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_dcee4f7f020d47e58fb48612bbba19d1
[2019-01-24 13:23:54.183266] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_2e6ad6aa03f7fe219807cb135ca1c766
[2019-01-24 13:24:02.689125] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_47c2af6849ac9ad7d7fbc897ae3ae80c
[2019-01-24 13:24:10.933527] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_5101331e9d33a5d04adab92837b9d5ad
[2019-01-24 13:24:14.706333] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume glusterfs-registry-volume
[2019-01-24 13:24:19.862557] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_62905ab0e959e7663f4576501c0c9b69
[2019-01-24 13:24:23.135543] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume heketidbstorage
[2019-01-24 13:24:28.643836] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_63428c902b5d577a7739eb66a050a420
The message "I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_2e2e6b9cf174901b370ea79a266c651b" repeated 2 times between [2019-01-24 13:23:40.173704] and [2019-01-24 13:24:33.267575]
[2019-01-24 13:24:40.354196] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_78d426889b7bd661f3d6c6f7814b6d4f
[2019-01-24 13:24:43.247244] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_dcee4f7f020d47e58fb48612bbba19d1
[2019-01-24 13:24:49.639209] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_7e73769a754d7f9f6213ef3b4551af0e
[2019-01-24 13:24:55.746577] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_2e6ad6aa03f7fe219807cb135ca1c766
[2019-01-24 13:25:05.446899] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_88fbf36bf87106e549dca765f171cf69
[2019-01-24 13:25:09.782082] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_47c2af6849ac9ad7d7fbc897ae3ae80c
[2019-01-24 13:25:19.766450] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_5101331e9d33a5d04adab92837b9d5ad
[2019-01-24 13:25:23.491994] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_937aa673e0d60126082bbd8c1589e383
[2019-01-24 13:25:30.964535] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_62905ab0e959e7663f4576501c0c9b69
[2019-01-24 13:25:37.543067] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_9f04d82be19e2ea8ee80deb9098cd390
[2019-01-24 13:25:41.302127] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_63428c902b5d577a7739eb66a050a420
[2019-01-24 13:25:47.991702] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_bc5012fd39c5b3ab958b9da4b0256d3a
[2019-01-24 13:25:51.509876] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_78d426889b7bd661f3d6c6f7814b6d4f
[2019-01-24 13:26:13.750166] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_88fbf36bf87106e549dca765f171cf69
[2019-01-24 13:26:22.469605] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_937aa673e0d60126082bbd8c1589e383
[2019-01-24 13:26:31.734250] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_9f04d82be19e2ea8ee80deb9098cd390
[2019-01-24 13:26:41.830052] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_bc5012fd39c5b3ab958b9da4b0256d3a
[2019-01-24 13:26:03.717515] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_7e73769a754d7f9f6213ef3b4551af0e
[2019-01-24 13:26:51.542274] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_c5fa076446479cf414397591c0af1c7f
[2019-01-24 13:27:01.142763] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_dbc601f7162df326784aea34c4ebe8f2
[2019-01-24 13:27:13.252008] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_dcee4f7f020d47e58fb48612bbba19d1
sh-4.2# ^C
sh-4.2# exit
[2019-01-21 10:06:19.319668] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_47c2af6849ac9ad7d7fbc897ae3ae80c
[2019-01-21 10:06:26.982268] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_5101331e9d33a5d04adab92837b9d5ad
[2019-01-21 10:06:36.841964] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_62905ab0e959e7663f4576501c0c9b69
[2019-01-21 10:06:45.498278] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_63428c902b5d577a7739eb66a050a420
[2019-01-21 10:06:56.695246] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_78d426889b7bd661f3d6c6f7814b6d4f
[2019-01-21 10:07:07.127836] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_7e73769a754d7f9f6213ef3b4551af0e
[2019-01-21 10:07:16.632875] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_88fbf36bf87106e549dca765f171cf69
[2019-01-21 10:07:27.240209] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_937aa673e0d60126082bbd8c1589e383
[2019-01-21 10:23:16.721036] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_9f04d82be19e2ea8ee80deb9098cd390
[2019-01-21 10:23:32.342652] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_bc5012fd39c5b3ab958b9da4b0256d3a
[2019-01-21 10:23:44.784146] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_c5fa076446479cf414397591c0af1c7f
[2019-01-21 10:23:58.113167] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume vol_dbc601f7162df326784aea34c4ebe8f2
[2019-01-21 15:36:59.948502] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2019-01-21 15:38:04.546366] W [glusterfsd.c:1514:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25) [0x7f48c0199e25] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x7f48c184cd65] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x7f48c184cb8b] ) 0-: received signum (15), shutting down
[2019-01-21 15:38:04.564188] I [MSGID: 100030] [glusterfsd.c:2741:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 4.1.5 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2019-01-21 15:38:04.567899] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
[2019-01-21 15:38:04.567931] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2019-01-21 15:38:04.567937] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2019-01-21 15:38:04.572111] W [MSGID: 103071] [rdma.c:4629:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2019-01-21 15:38:04.572127] W [MSGID: 103055] [rdma.c:4938:init] 0-rdma.management: Failed to initialize IB Device
[2019-01-21 15:38:04.572133] W [rpc-transport.c:351:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2019-01-21 15:38:04.572194] W [rpcsvc.c:1781:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2019-01-21 15:38:04.572200] E [MSGID: 106244] [glusterd.c:1764:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2019-01-21 15:38:05.404713] I [MSGID: 106513] [glusterd-store.c:2240:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 40100
[2019-01-21 15:38:05.405414] I [MSGID: 106544] [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: 65c42108-b5f4-4dfa-a161-fe6e76b0895a
[2019-01-21 15:38:05.828632] I [MSGID: 106498] [glusterd-handler.c:3614:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2019-01-21 15:38:05.828724] I [MSGID: 106498] [glusterd-handler.c:3614:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2019-01-21 15:38:05.828758] W [MSGID: 106061] [glusterd-handler.c:3408:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2019-01-21 15:38:05.828778] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:05.833085] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 10
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:
+------------------------------------------------------------------------------+
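
This graph is the management volume glusterd builds from /etc/glusterfs/glusterd.vol. Two options stand out while chasing stuck requests: event-threads is 1, so a single slow handler serializes every request queued behind it, and ping-timeout is 0, so glusterd never times out its peers on its own. Purely as an experiment (assumption: these containers use the default config path), the thread count can be checked, and raised, in that file before restarting glusterd:

sh-4.4# grep event-threads /etc/glusterfs/glusterd.vol
    option event-threads 1
sh-4.4# sed -i 's/event-threads 1/event-threads 4/' /etc/glusterfs/glusterd.vol   # then restart glusterd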
[2019-01-21 15:38:05.833076] W [MSGID: 106061] [glusterd-handler.c:3408:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2019-01-21 15:38:05.835309] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-01-21 15:38:14.600599] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_c4b582ef0712e8c2d750c97ff443e47c/brick on port 49166
[2019-01-21 15:38:14.607940] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_339b50a1c21d42f39d5568dcfbbf8844/brick_1f69ed2975c20811515f054a74669be6/brick on port 49158
[2019-01-21 15:38:14.745158] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_339b50a1c21d42f39d5568dcfbbf8844/brick_d343f2a8b3c19bda521f93d6327451f2/brick on port 49157
[2019-01-21 15:38:14.924969] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_339b50a1c21d42f39d5568dcfbbf8844/brick_b85460c3ba53bb7db5ab712f7b684c51/brick on port 49155
[2019-01-21 15:38:14.932438] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_28b3599a579db6e987f46ab0120f9d8c/brick on port 49164
[2019-01-21 15:38:15.052898] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_413e08918a40ce6806dd6c2711d923f3/brick on port 49159
[2019-01-21 15:38:15.123077] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_ac26b8e8306405623200599d05e87fe2/brick on port 49154
[2019-01-21 15:38:15.153882] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_9603a0a95036d6362ae2b38bdbb40428/brick on port 49165
[2019-01-21 15:38:15.198115] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_fb2e1c7f95f9926166c585ca16ac2402/brick on port 49163
[2019-01-21 15:38:15.203657] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_fd6a1a83004a11c0007e9392913aea8f/brick on port 49168
[2019-01-21 15:38:15.260264] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0574c4d2-6900-4447-a752-2bf7477b443e, host: app1.matrix.nokia.com, port: 0
[2019-01-21 15:38:15.268166] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_339b50a1c21d42f39d5568dcfbbf8844/brick_b85460c3ba53bb7db5ab712f7b684c51/brick
[2019-01-21 15:38:15.268226] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_339b50a1c21d42f39d5568dcfbbf8844/brick_b85460c3ba53bb7db5ab712f7b684c51/brick on port 49155
[2019-01-21 15:38:15.268302] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.268510] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_ac26b8e8306405623200599d05e87fe2/brick
[2019-01-21 15:38:15.268519] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_ac26b8e8306405623200599d05e87fe2/brick on port 49154
[2019-01-21 15:38:15.268536] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.268718] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_c607b4aed1593917939ba85df5eefaae/brick
[2019-01-21 15:38:15.268732] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_c607b4aed1593917939ba85df5eefaae/brick on port 49161
[2019-01-21 15:38:15.268748] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.268887] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_fb2e1c7f95f9926166c585ca16ac2402/brick
[2019-01-21 15:38:15.268894] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_fb2e1c7f95f9926166c585ca16ac2402/brick on port 49163
[2019-01-21 15:38:15.268909] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.269028] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_339b50a1c21d42f39d5568dcfbbf8844/brick_1f69ed2975c20811515f054a74669be6/brick
[2019-01-21 15:38:15.269036] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_339b50a1c21d42f39d5568dcfbbf8844/brick_1f69ed2975c20811515f054a74669be6/brick on port 49158
[2019-01-21 15:38:15.269050] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.269163] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_413e08918a40ce6806dd6c2711d923f3/brick
[2019-01-21 15:38:15.269171] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_413e08918a40ce6806dd6c2711d923f3/brick on port 49159
[2019-01-21 15:38:15.269189] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.269303] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_339b50a1c21d42f39d5568dcfbbf8844/brick_d343f2a8b3c19bda521f93d6327451f2/brick
[2019-01-21 15:38:15.269310] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_339b50a1c21d42f39d5568dcfbbf8844/brick_d343f2a8b3c19bda521f93d6327451f2/brick on port 49157
[2019-01-21 15:38:15.269328] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.269455] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_3c16802735df66a463468ce8262ef0a6/brick
[2019-01-21 15:38:15.269462] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_3c16802735df66a463468ce8262ef0a6/brick on port 49171
[2019-01-21 15:38:15.269477] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.269602] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_28b3599a579db6e987f46ab0120f9d8c/brick
[2019-01-21 15:38:15.269611] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_06f12f2c23a257117083a6d86d6f4087/brick_28b3599a579db6e987f46ab0120f9d8c/brick on port 49164
[2019-01-21 15:38:15.269628] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.269747] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_774b52d13a2cf996784181cf6d7db93c/brick
[2019-01-21 15:38:15.269753] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_774b52d13a2cf996784181cf6d7db93c/brick on port 49156
[2019-01-21 15:38:15.269768] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.269891] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_e26c7a93f2a7b6d6455ab5fa7615dfb1/brick
[2019-01-21 15:38:15.269898] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_e26c7a93f2a7b6d6455ab5fa7615dfb1/brick on port 49167
[2019-01-21 15:38:15.269912] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.270029] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_bb29a705b3f39522b131277d64761033/brick
[2019-01-21 15:38:15.270036] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_bb29a705b3f39522b131277d64761033/brick on port 49172
[2019-01-21 15:38:15.270051] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.270164] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_9603a0a95036d6362ae2b38bdbb40428/brick
[2019-01-21 15:38:15.270171] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_9603a0a95036d6362ae2b38bdbb40428/brick on port 49165
[2019-01-21 15:38:15.270185] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.270326] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_c4b582ef0712e8c2d750c97ff443e47c/brick
[2019-01-21 15:38:15.270333] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_c4b582ef0712e8c2d750c97ff443e47c/brick on port 49166
[2019-01-21 15:38:15.270350] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.270489] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_93eb3caff9f179d522b5498b515bf991/brick
[2019-01-21 15:38:15.270495] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_93eb3caff9f179d522b5498b515bf991/brick on port 49162
[2019-01-21 15:38:15.270510] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.270623] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_c5d298199cfbc91ebf013707709bdbb6/brick
[2019-01-21 15:38:15.270630] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_c5d298199cfbc91ebf013707709bdbb6/brick on port 49160
[2019-01-21 15:38:15.270645] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2019-01-21 15:38:15.270789] I [glusterd-utils.c:5994:glusterd_brick_start] 0-management: discovered already-running brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_fd6a1a83004a11c0007e9392913aea8f/brick
[2019-01-21 15:38:15.270796] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_fd6a1a83004a11c0007e9392913aea8f/brick on port 49168
[2019-01-21 15:38:15.270809] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
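
Every brick above was re-adopted ("discovered already-running brick") rather than respawned, so the glusterd restart did not disturb the data path. The ports glusterd registered in pmap should line up with what the volumes report; assuming the gluster CLI in the pod, that can be cross-checked with:

sh-4.4# gluster volume status | grep Brick    # ports here should match the pmap_registry_bind lines above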
[2019-01-21 15:38:15.310864] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600
[2019-01-21 15:38:15.310974] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2019-01-21 15:38:15.311012] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped
[2019-01-21 15:38:15.311048] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed
[2019-01-21 15:38:15.311083] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600
[2019-01-21 15:38:15.317788] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 113606
[2019-01-21 15:38:16.318271] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: glustershd service is stopped
[2019-01-21 15:38:16.318431] I [MSGID: 106567] [glusterd-svc-mgmt.c:203:glusterd_svc_start] 0-management: Starting glustershd service
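
glustershd being stopped and restarted here is the normal restart path, not an error. To confirm the self-heal daemon actually came back up, using one of the volume names from the log (assuming the standard CLI syntax):

sh-4.4# gluster volume status vol_937aa673e0d60126082bbd8c1589e383 shd
sh-4.4# ps aux | grep glustershd | grep -v grep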
[2019-01-21 15:38:17.322730] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2019-01-21 15:38:17.323728] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2019-01-21 15:38:17.323766] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: quotad service is stopped
[2019-01-21 15:38:17.323812] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2019-01-21 15:38:17.324303] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2019-01-21 15:38:17.324326] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped
[2019-01-21 15:38:17.324368] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2019-01-21 15:38:17.324887] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2019-01-21 15:38:17.324916] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped
[2019-01-21 15:38:18.832633] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.832850] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.832987] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.833132] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.833268] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.833398] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.833594] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.833753] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.833888] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.834028] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.834163] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.834286] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.834414] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.834555] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.834697] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.834822] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.834960] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2019-01-21 15:38:18.835082] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.835267] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.835417] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.835594] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.835755] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.835927] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.836097] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.836265] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.836427] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.836598] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.836760] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.836917] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.837067] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.837240] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.837393] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.837550] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.837721] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2019-01-21 15:38:18.837905] I [MSGID: 106492] [glusterd-handler.c:2726:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0574c4d2-6900-4447-a752-2bf7477b443e
[2019-01-21 15:38:18.837946] I [MSGID: 106502] [glusterd-handler.c:2771:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2019-01-21 15:38:18.844246] I [MSGID: 106493] [glusterd-rpc-ops.c:702:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0574c4d2-6900-4447-a752-2bf7477b443e
[2019-01-21 15:38:18.844305] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 25b5f142-4890-4315-a352-cf947fdf649c, host: 192.168.89.220, port: 0
[2019-01-21 15:38:18.855773] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_774b52d13a2cf996784181cf6d7db93c/brick on port 49156
[2019-01-21 15:38:18.857867] I [MSGID: 106163] [glusterd-handshake.c:1356:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 40100
[2019-01-21 15:38:18.866315] I [MSGID: 106493] [glusterd-rpc-ops.c:702:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 25b5f142-4890-4315-a352-cf947fdf649c
[2019-01-21 15:38:18.867481] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_c5d298199cfbc91ebf013707709bdbb6/brick on port 49160
[2019-01-21 15:38:18.868546] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_93eb3caff9f179d522b5498b515bf991/brick on port 49162
[2019-01-21 15:38:18.869579] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_c607b4aed1593917939ba85df5eefaae/brick on port 49161
[2019-01-21 15:38:18.870605] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_3343a86e75865dd02b054fb268781815/brick_bb29a705b3f39522b131277d64761033/brick on port 49172
[2019-01-21 15:38:18.870675] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_e26c7a93f2a7b6d6455ab5fa7615dfb1/brick on port 49167
[2019-01-21 15:38:18.871696] I [MSGID: 106492] [glusterd-handler.c:2726:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 25b5f142-4890-4315-a352-cf947fdf649c
[2019-01-21 15:38:18.876594] I [MSGID: 106502] [glusterd-handler.c:2771:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2019-01-21 15:38:18.876696] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_ae7c5467f294a953bfb274c1e6afc26d/brick_3c16802735df66a463468ce8262ef0a6/brick on port 49171
[2019-01-21 15:38:18.876768] I [MSGID: 106490] [glusterd-handler.c:2548:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0574c4d2-6900-4447-a752-2bf7477b443e
[2019-01-21 15:38:18.894122] I [MSGID: 106493] [glusterd-handler.c:3811:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to app1.matrix.nokia.com (0), ret: 0, op_ret: 0
[2019-01-21 15:38:18.924664] I [MSGID: 106492] [glusterd-handler.c:2726:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0574c4d2-6900-4447-a752-2bf7477b443e
[2019-01-21 15:38:18.924689] I [MSGID: 106502] [glusterd-handler.c:2771:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2019-01-21 15:38:18.935666] I [MSGID: 106163] [glusterd-handshake.c:1356:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 40100
[2019-01-21 15:38:18.958387] I [MSGID: 106493] [glusterd-rpc-ops.c:702:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0574c4d2-6900-4447-a752-2bf7477b443e
[2019-01-21 15:38:18.964825] I [MSGID: 106490] [glusterd-handler.c:2548:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 25b5f142-4890-4315-a352-cf947fdf649c
[2019-01-21 15:38:18.972404] I [MSGID: 106493] [glusterd-handler.c:3811:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 192.168.89.220 (0), ret: 0, op_ret: 0
[2019-01-21 15:38:18.998535] I [MSGID: 106492] [glusterd-handler.c:2726:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 25b5f142-4890-4315-a352-cf947fdf649c
[2019-01-21 15:38:19.007773] I [MSGID: 106502] [glusterd-handler.c:2771:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2019-01-21 15:38:19.008485] I [MSGID: 106493] [glusterd-rpc-ops.c:702:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 25b5f142-4890-4315-a352-cf947fdf649c
[2019-01-21 15:39:49.020865] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
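
The handshake ends with both peers ACKed on op-version 40100, so the restart itself looks clean; nothing in this glusterd log explains the in-flight operations piling up on the heketi side. When the counter climbs again, the most useful next step is the one already suggested in the thread: look for a gluster or lvm child process that never returned, since glusterd will queue everything behind it. A sketch, assuming ps is available in the gluster pods:

sh-4.4# ps aux | grep -E 'gluster volume|lvcreate|lvremove|lvs' | grep -v grep   # anything long-lived here is a suspect
sh-4.4# gluster peer status    # every peer should show State: Peer in Cluster (Connected)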
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
