Re: Issues with replicated gluster volume


 



Hi Karthik,

Please find the attached logs.

Kindly suggest how to make the volume highly available.

Thanks,
Ahemad



On Tuesday, 16 June, 2020, 12:09:10 pm IST, Karthik Subrahmanya <ksubrahm@xxxxxxxxxx> wrote:


Hi,

Thanks for the clarification.
In that case, can you attach the complete glusterd, brick, and mount logs from all the nodes from when this happened?
Also, please paste the output you see when you try to access or perform operations on the mount point.

Regards,
Karthik

On Tue, Jun 16, 2020 at 11:55 AM ahemad shaik <ahemad_shaik@xxxxxxxxx> wrote:
Sorry, it was a typo.

The exact command I used is below.

The volume is mounted on node4.

"mount -t glusterfs node1:/glustervol /mnt/"


The gluster volume is created from node1, node2, and node3.

"gluster volume create glustervol replica 3 transport tcp node1:/data node2:/data node3:/data force"

I have tried rebooting node3 to test high availability. 
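As an aside, after a rebooted brick node rejoins, one way to confirm the replica set has resynced before failing the next node is the standard heal-info command (a sketch using the volume name from this thread; run on any node hosting the volume):

```shell
# Lists entries on each brick that still need healing after node3's reboot.
# Zero entries on all three bricks means the replicas are back in sync.
gluster volume heal glustervol info

# Terser per-brick counts, available on recent gluster releases.
gluster volume heal glustervol info summary
```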

I hope it is clear now.

Please let me know if any questions.

Thanks,
Ahemad 



On Tuesday, 16 June, 2020, 11:45:48 am IST, Karthik Subrahmanya <ksubrahm@xxxxxxxxxx> wrote:


Hi Ahemad,

A quick question on the mount command that you have used
"mount -t glusterfs node4:/glustervol    /mnt/"
Here you are specifying the hostname as node4 instead of node{1,2,3}, which actually host the volume you intend to mount. Is this a typo, or did you paste the exact command you used for mounting?
If it is the actual command you used, then node4 seems to have some stale volume details that were not cleaned up properly, and those are being used while mounting. According to the peer info you provided, only node1, 2 & 3 are part of the list, so node4 is unaware of the volume you want to mount, and this command is mounting a volume that is only visible to node4.
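For what it's worth, the host named in the mount command is only contacted at mount time to fetch the volume layout (volfile); after that the client talks to all bricks directly. A mount sketch using the standard backup-volfile-servers option (node names taken from this thread) lets the mount itself succeed even if the first-named node is down:

```shell
# node1 serves the volfile; node2/node3 are fallbacks if node1 is unreachable
# at mount time. Brick I/O is unaffected by which volfile server answered.
mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/glustervol /mnt
```

The equivalent /etc/fstab entry would carry the same option, e.g. `node1:/glustervol /mnt glusterfs defaults,_netdev,backup-volfile-servers=node2:node3 0 0`.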

Regards,
Karthik

On Tue, Jun 16, 2020 at 11:11 AM ahemad shaik <ahemad_shaik@xxxxxxxxx> wrote:
Hi Karthik,


Please find the details below. I see there are errors about being unable to connect to the port, and warnings that the transport endpoint is not connected. The complete logs are below.

kindly suggest.

1. gluster peer status

gluster peer status
Number of Peers: 2

Hostname: node1
Uuid: 0e679115-15ad-4a85-9d0a-9178471ef90
State: Peer in Cluster (Connected)

Hostname: node2
Uuid: 785a7c5b-86d3-45b9-b371-7e66e7fa88e0
State: Peer in Cluster (Connected)


gluster pool list
UUID                                    Hostname                                State
0e679115-15ad-4a85-9d0a-9178471ef90     node1         Connected
785a7c5b-86d3-45b9-b371-7e66e7fa88e0    node2                                   Connected
ec137af6-4845-4ebb-955a-fac1df9b7b6c    localhost(node3)                        Connected

2. gluster volume info glustervol

Volume Name: glustervol
Type: Replicate
Volume ID: 5422bb27-1863-47d5-b216-61751a01b759
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data
Brick2: node2:/data
Brick3: node3:/data
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet

3. gluster volume status glustervol

gluster volume status glustervol
Status of volume: glustervol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/data                            49152     0          Y       59739
Brick node2:/data                            49153     0          Y       3498
Brick node3:/data                            49152     0          Y       1880
Self-heal Daemon on localhost                N/A       N/A        Y       1905
Self-heal Daemon on node1                    N/A       N/A        Y       3519
Self-heal Daemon on node2                    N/A       N/A        Y       59760

Task Status of Volume glustervol
------------------------------------------------------------------------------
There are no active volume tasks

4. client log from node4 when you saw unavailability-

Below are the logs from when I rebooted server node3; we can see in the logs that "0-glustervol-client-2: disconnected from glustervol-client-2".

Please find the complete logs below, from the reboot until the server became available again. I am testing high availability by simply rebooting a server. In a real-world scenario, a server may be unavailable for some hours, so I just don't want to have a long downtime.


[2020-06-16 05:14:25.256136] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-0: Connected to glustervol-client-0, attached to remote volume '/data'.
[2020-06-16 05:14:25.256179] I [MSGID: 108005] [afr-common.c:5247:__afr_handle_child_up_event] 0-glustervol-replicate-0: Subvolume 'glustervol-client-0' came back up; going online.
[2020-06-16 05:14:25.257972] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-1: Connected to glustervol-client-1, attached to remote volume '/data'.
[2020-06-16 05:14:25.258014] I [MSGID: 108002] [afr-common.c:5609:afr_notify] 0-glustervol-replicate-0: Client-quorum is met
[2020-06-16 05:14:25.260312] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.
[2020-06-16 05:14:25.261935] I [fuse-bridge.c:5145:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23
[2020-06-16 05:14:25.261957] I [fuse-bridge.c:5756:fuse_graph_sync] 0-fuse: switched to graph 0
[2020-06-16 05:16:59.729400] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-glustervol-client-2: disconnected from glustervol-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2020-06-16 05:16:59.730053] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:08.175698 (xid=0xae)
[2020-06-16 05:16:59.730089] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]
[2020-06-16 05:16:59.730336] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:10.237849 (xid=0xaf)
[2020-06-16 05:16:59.730540] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:22.694419 (xid=0xb0)
[2020-06-16 05:16:59.731132] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:27.574139 (xid=0xb1)
[2020-06-16 05:16:59.731319] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2020-06-16 05:16:34.231433 (xid=0xb2)
[2020-06-16 05:16:59.731352] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-glustervol-client-2: socket disconnected
[2020-06-16 05:16:59.731464] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:41.213884 (xid=0xb3)
[2020-06-16 05:16:59.731650] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:48.756212 (xid=0xb4)
[2020-06-16 05:16:59.731876] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:52.258940 (xid=0xb5)
[2020-06-16 05:16:59.732060] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:54.618301 (xid=0xb6)
[2020-06-16 05:16:59.732246] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:58.288790 (xid=0xb7)
[2020-06-16 05:17:10.245302] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-2: changing port to 49152 (from 0)
[2020-06-16 05:17:10.249896] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'
The message "W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]" repeated 8 times between [2020-06-16 05:16:59.730089] and [2020-06-16 05:16:59.732278]
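A side note on the roughly 42-second window visible above (in-flight LOOKUPs issued from 05:16:08 are only unwound at 05:16:59): it matches GlusterFS's default network.ping-timeout of 42 seconds, during which operations against the lost brick block before being failed over to the surviving replicas. A hedged sketch of shortening that freeze window (volume name from this thread):

```shell
# Trade-off: a shorter ping-timeout makes clients give up on a dead brick
# sooner, but values that are too low turn transient stalls on a busy
# network into spurious disconnects and needless heals.
gluster volume set glustervol network.ping-timeout 10

# Confirm the active value.
gluster volume get glustervol network.ping-timeout
```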

Thanks,
Ahemad

On Tuesday, 16 June, 2020, 10:58:42 am IST, ahemad shaik <ahemad_shaik@xxxxxxxxx> wrote:


Hi Karthik,

Please find the details below.

Please provide the following info:
1. gluster peer status

gluster peer status
Number of Peers: 2

Hostname: node1
Uuid: 0e679115-15ad-4a85-9d0a-9178471ef90
State: Peer in Cluster (Connected)

Hostname: node2
Uuid: 785a7c5b-86d3-45b9-b371-7e66e7fa88e0
State: Peer in Cluster (Connected)


gluster pool list
UUID                                    Hostname                                State
0e679115-15ad-4a85-9d0a-9178471ef90     node1                                   Connected
785a7c5b-86d3-45b9-b371-7e66e7fa88e0    node2                                   Connected
ec137af6-4845-4ebb-955a-fac1df9b7b6c    localhost(node3)                        Connected

2. gluster volume info glustervol

Volume Name: glustervol
Type: Replicate
Volume ID: 5422bb27-1863-47d5-b216-61751a01b759
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data
Brick2: node2:/data
Brick3: node3:/data
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet

3. gluster volume status glustervol

gluster volume status glustervol
Status of volume: glustervol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/data                            49152     0          Y       59739
Brick node2:/data                            49153     0          Y       3498
Brick node3:/data                            49152     0          Y       1880
Self-heal Daemon on localhost                N/A       N/A        Y       1905
Self-heal Daemon on node1                    N/A       N/A        Y       3519
Self-heal Daemon on node2                    N/A       N/A        Y       59760

Task Status of Volume glustervol
------------------------------------------------------------------------------
There are no active volume tasks

4. client log from node4 when you saw unavailability-

Below are the logs from when I rebooted server node3; we can see in the logs that "0-glustervol-client-2: disconnected from glustervol-client-2".

Please find the complete logs below, from the reboot until the server became available again. I am testing high availability by simply rebooting a server. In a real-world scenario, a server may be unavailable for some hours, so we just don't want to have a long downtime.


[2020-06-16 05:14:25.256136] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-0: Connected to glustervol-client-0, attached to remote volume '/data'.
[2020-06-16 05:14:25.256179] I [MSGID: 108005] [afr-common.c:5247:__afr_handle_child_up_event] 0-glustervol-replicate-0: Subvolume 'glustervol-client-0' came back up; going online.
[2020-06-16 05:14:25.257972] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-1: Connected to glustervol-client-1, attached to remote volume '/data'.
[2020-06-16 05:14:25.258014] I [MSGID: 108002] [afr-common.c:5609:afr_notify] 0-glustervol-replicate-0: Client-quorum is met
[2020-06-16 05:14:25.260312] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.
[2020-06-16 05:14:25.261935] I [fuse-bridge.c:5145:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23
[2020-06-16 05:14:25.261957] I [fuse-bridge.c:5756:fuse_graph_sync] 0-fuse: switched to graph 0
[2020-06-16 05:16:59.729400] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-glustervol-client-2: disconnected from glustervol-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2020-06-16 05:16:59.730053] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:08.175698 (xid=0xae)
[2020-06-16 05:16:59.730089] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]
[2020-06-16 05:16:59.730336] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:10.237849 (xid=0xaf)
[2020-06-16 05:16:59.730540] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:22.694419 (xid=0xb0)
[2020-06-16 05:16:59.731132] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:27.574139 (xid=0xb1)
[2020-06-16 05:16:59.731319] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2020-06-16 05:16:34.231433 (xid=0xb2)
[2020-06-16 05:16:59.731352] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-glustervol-client-2: socket disconnected
[2020-06-16 05:16:59.731464] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:41.213884 (xid=0xb3)
[2020-06-16 05:16:59.731650] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:48.756212 (xid=0xb4)
[2020-06-16 05:16:59.731876] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:52.258940 (xid=0xb5)
[2020-06-16 05:16:59.732060] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:54.618301 (xid=0xb6)
[2020-06-16 05:16:59.732246] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:58.288790 (xid=0xb7)
[2020-06-16 05:17:10.245302] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-2: changing port to 49152 (from 0)
[2020-06-16 05:17:10.249896] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.

Thanks,
Ahemad

On Tuesday, 16 June, 2020, 10:10:16 am IST, Karthik Subrahmanya <ksubrahm@xxxxxxxxxx> wrote:


Hi Ahemad,

Please provide the following info:
1. gluster peer status
2. gluster volume info glustervol
3. gluster volume status glustervol
4. client log from node4 when you saw unavailability

Regards,
Karthik

On Mon, Jun 15, 2020 at 11:07 PM ahemad shaik <ahemad_shaik@xxxxxxxxx> wrote:
Hi There,

I have created 3 replica gluster volume with 3 bricks from 3 nodes.

"gluster volume create glustervol replica 3 transport tcp node1:/data node2:/data node3:/data force"

mounted on client node using below command.

"mount -t glusterfs node4:/glustervol    /mnt/"

When any of the nodes (node1, node2, or node3) goes down, the gluster mount/volume (/mnt) is not accessible on the client (node4).

The purpose of a replicated volume is high availability, but I am not able to achieve it.

Is this a bug, or am I missing something?


Any suggestions would be a great help!

kindly suggest.

Thanks,
Ahemad  
 
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
node1:

[2020-06-16 05:14:25.252416] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "17.18.11.14"
[2020-06-16 05:14:25.252504] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:ff0af68b-bc9a-4269-8589-f7ef60e27f5e-GRAPH_ID:0-PID:100753-HOST:node4-PC_NAME:glustervol-client-0-RECON_NO:-0 (version: 6.9) with subvol /data
[2020-06-16 05:16:42.938464] I [MSGID: 115036] [server.c:501:server_rpc_notify] 0-glustervol-server: disconnecting connection from CTX_ID:2ae6a66f-c58b-486f-8cee-02aaa1032f9b-GRAPH_ID:0-PID:1905-HOST:node3-PC_NAME:glustervol-client-0-RECON_NO:-0
[2020-06-16 05:16:42.939986] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-glustervol-server: Shutting down connection CTX_ID:2ae6a66f-c58b-486f-8cee-02aaa1032f9b-GRAPH_ID:0-PID:1905-HOST:node3-PC_NAME:glustervol-client-0-RECON_NO:-0
[2020-06-16 05:16:45.999582] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.11.99.79"
[2020-06-16 05:16:45.999646] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 05:16:45.999659] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:9628bd0e-07f7-4fba-abee-fe10bdd87944-GRAPH_ID:0-PID:2316-HOST:node3-PC_NAME:glustervol-client-0-RECON_NO:-0 (version: 7.5) with subvol /data
[2020-06-16 06:17:36.846266] I [MSGID: 115036] [server.c:501:server_rpc_notify] 0-glustervol-server: disconnecting connection from CTX_ID:9628bd0e-07f7-4fba-abee-fe10bdd87944-GRAPH_ID:0-PID:2316-HOST:node3-PC_NAME:glustervol-client-0-RECON_NO:-0
[2020-06-16 06:17:36.846883] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-glustervol-server: Shutting down connection CTX_ID:9628bd0e-07f7-4fba-abee-fe10bdd87944-GRAPH_ID:0-PID:2316-HOST:node3-PC_NAME:glustervol-client-0-RECON_NO:-0
[2020-06-16 06:17:46.746615] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.11.99.79"
[2020-06-16 06:17:46.746682] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 06:17:46.746723] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:464d653b-1c3c-46e2-bfe0-d26c59860370-GRAPH_ID:0-PID:1894-HOST:node3-PC_NAME:glustervol-client-0-RECON_NO:-0 (version: 7.5) with subvol /data
[2020-06-16 07:05:16.302332] I [MSGID: 115036] [server.c:501:server_rpc_notify] 0-glustervol-server: disconnecting connection from CTX_ID:464d653b-1c3c-46e2-bfe0-d26c59860370-GRAPH_ID:0-PID:1894-HOST:node3-PC_NAME:glustervol-client-0-RECON_NO:-0
[2020-06-16 07:05:16.302909] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-glustervol-server: Shutting down connection CTX_ID:464d653b-1c3c-46e2-bfe0-d26c59860370-GRAPH_ID:0-PID:1894-HOST:node3-PC_NAME:glustervol-client-0-RECON_NO:-0
[2020-06-16 07:05:23.621957] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.11.99.79"
[2020-06-16 07:05:23.622023] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 07:05:23.622038] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:047b5329-a111-4e09-8fe2-ad632369e4db-GRAPH_ID:0-PID:1995-HOST:node3-PC_NAME:glustervol-client-0-RECON_NO:-0 (version: 7.5) with subvol /data
[2020-06-16 05:14:25.228229] I [MSGID: 106496] [glusterd-handshake.c:935:__server_getspec] 0-management: Received mount request for volume /glustervol
[2020-06-16 05:16:06.513210] I [MSGID: 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management: Peer <node4> (<ec137af6-4845-4ebb-955a-fac1d19b7b6c>), in state <Peer in Cluster>, has disconnected from glusterd.
[2020-06-16 05:16:06.513546] W [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x235fa) [0x7fde4f2f25fa] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2e300) [0x7fde4f2fd300] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xe8773) [0x7fde4f3b7773] ) 0-management: Lock for vol glustervol not held
[2020-06-16 05:16:06.513563] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for glustervol
[2020-06-16 05:16:43.494497] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 05:16:43.499901] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:16:44.940578] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node4 (0), ret: 0, op_ret: 0
[2020-06-16 05:16:44.943644] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:16:44.943681] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 05:16:44.946366] I [MSGID: 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=glustervol) to existing process with pid 59760
[2020-06-16 05:16:44.947064] I [MSGID: 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume glustervol attached successfully to pid 59760
[2020-06-16 05:16:45.960838] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:16:45.967842] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c, host: node4, port: 0
[2020-06-16 05:16:45.969591] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:16:45.969655] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 05:16:45.971904] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:17:08.353960] I [MSGID: 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management: Peer <node4> (<ec137af6-4845-4ebb-955a-fac1d19b7b6c>), in state <Peer in Cluster>, has disconnected from glusterd.
[2020-06-16 06:17:08.354324] W [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x235fa) [0x7fde4f2f25fa] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2e300) [0x7fde4f2fd300] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xe8773) [0x7fde4f3b7773] ) 0-management: Lock for vol glustervol not held
[2020-06-16 06:17:08.354342] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for glustervol
[2020-06-16 06:17:45.269584] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 06:17:45.275547] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:17:45.694020] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node4 (0), ret: 0, op_ret: 0
[2020-06-16 06:17:45.697795] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:17:45.697837] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 06:17:45.701231] I [MSGID: 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=glustervol) to existing process with pid 59760
[2020-06-16 06:17:45.702390] I [MSGID: 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume glustervol attached successfully to pid 59760
[2020-06-16 06:17:46.717061] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:17:46.725422] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c, host: node4, port: 0
[2020-06-16 06:17:46.726769] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:17:46.726807] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 06:17:46.729184] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:54:41.575352] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api
[2020-06-16 06:54:41.576804] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2020-06-16 06:54:41.588691] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/storage/bd.so: cannot open shared object file: No such file or directory
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2020-06-16 06:54:41.575352] and [2020-06-16 06:54:41.575510]
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2020-06-16 06:54:41.576804] and [2020-06-16 06:54:41.577267]
[2020-06-16 07:04:45.626454] I [MSGID: 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management: Peer <node4> (<ec137af6-4845-4ebb-955a-fac1d19b7b6c>), in state <Peer in Cluster>, has disconnected from glusterd.
[2020-06-16 07:04:45.626778] W [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x235fa) [0x7fde4f2f25fa] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2e300) [0x7fde4f2fd300] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xe8773) [0x7fde4f3b7773] ) 0-management: Lock for vol glustervol not held
[2020-06-16 07:04:45.626796] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for glustervol
[2020-06-16 07:04:48.061820] I [MSGID: 106499] [glusterd-handler.c:4264:__glusterd_handle_status_volume] 0-management: Received status volume req for volume glustervol
[2020-06-16 07:05:21.900573] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 07:05:21.902922] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
The message "I [MSGID: 106499] [glusterd-handler.c:4264:__glusterd_handle_status_volume] 0-management: Received status volume req for volume glustervol" repeated 17 times between [2020-06-16 07:04:48.061820] and [2020-06-16 07:05:22.571451]
[2020-06-16 07:05:22.705001] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node4 (0), ret: 0, op_ret: 0
[2020-06-16 07:05:22.709620] I [MSGID: 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=glustervol) to existing process with pid 59760
[2020-06-16 07:05:22.710716] I [MSGID: 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume glustervol attached successfully to pid 59760
[2020-06-16 07:05:23.590454] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 07:05:23.590574] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 07:05:23.596050] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 07:05:23.602422] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c, host: node4, port: 0
[2020-06-16 07:05:23.604190] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 07:05:23.604223] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 07:05:23.618982] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 07:05:24.304375] I [MSGID: 106499] [glusterd-handler.c:4264:__glusterd_handle_status_volume] 0-management: Received status volume req for volume glustervol
The message "I [MSGID: 106499] [glusterd-handler.c:4264:__glusterd_handle_status_volume] 0-management: Received status volume req for volume glustervol" repeated 12 times between [2020-06-16 07:05:24.304375] and [2020-06-16 07:05:43.392469]
[2020-06-16 07:36:01.748073] I [MSGID: 106496] [glusterd-handshake.c:935:__server_getspec] 0-management: Received mount request for volume glustervol.node1.data
node2
[2020-06-16 05:14:25.256799] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "17.18.11.14"
[2020-06-16 05:14:25.256887] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:ff0af68b-bc9a-4269-8589-f7ef60e27f5e-GRAPH_ID:0-PID:100753-HOST:node4-PC_NAME:glustervol-client-1-RECON_NO:-0 (version: 6.9) with subvol /data
[2020-06-16 05:16:42.944945] I [MSGID: 115036] [server.c:501:server_rpc_notify] 0-glustervol-server: disconnecting connection from CTX_ID:2ae6a66f-c58b-486f-8cee-02aaa1032f9b-GRAPH_ID:0-PID:1905-HOST:node3-PC_NAME:glustervol-client-1-RECON_NO:-0
[2020-06-16 05:16:42.945403] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-glustervol-server: Shutting down connection CTX_ID:2ae6a66f-c58b-486f-8cee-02aaa1032f9b-GRAPH_ID:0-PID:1905-HOST:node3-PC_NAME:glustervol-client-1-RECON_NO:-0
[2020-06-16 05:16:46.002511] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.11.99.79"
[2020-06-16 05:16:46.002586] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 05:16:46.002601] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:9628bd0e-07f7-4fba-abee-fe10bdd87944-GRAPH_ID:0-PID:2316-HOST:node3-PC_NAME:glustervol-client-1-RECON_NO:-0 (version: 7.5) with subvol /data
[2020-06-16 06:17:36.852967] I [MSGID: 115036] [server.c:501:server_rpc_notify] 0-glustervol-server: disconnecting connection from CTX_ID:9628bd0e-07f7-4fba-abee-fe10bdd87944-GRAPH_ID:0-PID:2316-HOST:node3-PC_NAME:glustervol-client-1-RECON_NO:-0
[2020-06-16 06:17:36.853641] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-glustervol-server: Shutting down connection CTX_ID:9628bd0e-07f7-4fba-abee-fe10bdd87944-GRAPH_ID:0-PID:2316-HOST:node3-PC_NAME:glustervol-client-1-RECON_NO:-0
[2020-06-16 06:17:46.750732] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.11.99.79"
[2020-06-16 06:17:46.750803] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 06:17:46.750823] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:464d653b-1c3c-46e2-bfe0-d26c59860370-GRAPH_ID:0-PID:1894-HOST:node3-PC_NAME:glustervol-client-1-RECON_NO:-0 (version: 7.5) with subvol /data
[2020-06-16 07:05:16.308861] I [MSGID: 115036] [server.c:501:server_rpc_notify] 0-glustervol-server: disconnecting connection from CTX_ID:464d653b-1c3c-46e2-bfe0-d26c59860370-GRAPH_ID:0-PID:1894-HOST:node3-PC_NAME:glustervol-client-1-RECON_NO:-0
[2020-06-16 07:05:16.309386] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-glustervol-server: Shutting down connection CTX_ID:464d653b-1c3c-46e2-bfe0-d26c59860370-GRAPH_ID:0-PID:1894-HOST:node3-PC_NAME:glustervol-client-1-RECON_NO:-0
[2020-06-16 07:05:23.630782] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.11.99.79"
[2020-06-16 07:05:23.630819] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 07:05:23.630830] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:047b5329-a111-4e09-8fe2-ad632369e4db-GRAPH_ID:0-PID:1995-HOST:node3-PC_NAME:glustervol-client-1-RECON_NO:-0 (version: 7.5) with subvol /data
[2020-06-16 05:16:06.513471] I [MSGID: 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management: Peer <node3> (<ec137af6-4845-4ebb-955a-fac1d19b7b6c>), in state <Peer in Cluster>, has disconnected from glusterd.
[2020-06-16 05:16:06.513821] W [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x235fa) [0x7f505dca55fa] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2e300) [0x7f505dcb0300] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xe8773) [0x7f505dd6a773] ) 0-management: Lock for vol glustervol not held
[2020-06-16 05:16:06.513837] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for glustervol
[2020-06-16 05:16:43.494612] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 05:16:43.496649] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:17:12.634860] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node3 (0), ret: 0, op_ret: 0
[2020-06-16 05:17:12.637964] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:17:12.639215] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 05:17:12.639364] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:17:12.640936] I [MSGID: 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=glustervol) to existing process with pid 3519
[2020-06-16 05:17:12.642498] I [MSGID: 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume glustervol attached successfully to pid 3519
[2020-06-16 05:17:12.645184] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c, host: node3, port: 0
[2020-06-16 05:17:12.646423] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:17:12.647482] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 05:17:12.648617] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:17:08.354017] I [MSGID: 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management: Peer <node3> (<ec137af6-4845-4ebb-955a-fac1d19b7b6c>), in state <Peer in Cluster>, has disconnected from glusterd.
[2020-06-16 06:17:08.354277] W [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x235fa) [0x7f505dca55fa] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2e300) [0x7f505dcb0300] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xe8773) [0x7f505dd6a773] ) 0-management: Lock for vol glustervol not held
[2020-06-16 06:17:08.354315] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for glustervol
[2020-06-16 06:17:45.269437] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 06:17:45.271726] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:18:14.459655] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node3 (0), ret: 0, op_ret: 0
[2020-06-16 06:18:14.465003] I [MSGID: 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=glustervol) to existing process with pid 3519
[2020-06-16 06:18:14.465217] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:18:14.466816] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 06:18:14.467147] I [MSGID: 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume glustervol attached successfully to pid 3519
[2020-06-16 06:18:14.467178] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:18:14.470959] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c, host: node3, port: 0
[2020-06-16 06:18:14.472070] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:18:14.473112] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 06:18:14.474433] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:54:47.673367] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api
[2020-06-16 06:54:47.674198] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2020-06-16 06:54:47.682385] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/storage/bd.so: cannot open shared object file: No such file or directory
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2020-06-16 06:54:47.673367] and [2020-06-16 06:54:47.673452]
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2020-06-16 06:54:47.674198] and [2020-06-16 06:54:47.674399]
[2020-06-16 07:04:45.626740] I [MSGID: 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management: Peer <node3> (<ec137af6-4845-4ebb-955a-fac1d19b7b6c>), in state <Peer in Cluster>, has disconnected from glusterd.
[2020-06-16 07:04:45.627101] W [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x235fa) [0x7f505dca55fa] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2e300) [0x7f505dcb0300] -->/usr/lib64/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xe8773) [0x7f505dd6a773] ) 0-management: Lock for vol glustervol not held
[2020-06-16 07:04:45.627151] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for glustervol
[2020-06-16 07:05:21.900542] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 07:05:21.906687] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 07:05:22.567772] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node3 (0), ret: 0, op_ret: 0
[2020-06-16 07:05:22.570504] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 07:05:22.571524] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 07:05:23.607113] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c, host: node3, port: 0
[2020-06-16 07:05:23.610618] I [MSGID: 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=glustervol) to existing process with pid 3519
[2020-06-16 07:05:23.611662] I [MSGID: 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume glustervol attached successfully to pid 3519
[2020-06-16 07:05:23.616907] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 07:05:23.618607] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 07:05:23.620369] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:14:25.259400] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "17.18.11.14"
[2020-06-16 05:14:25.259457] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:ff0af68b-bc9a-4269-8589-f7ef60e27f5e-GRAPH_ID:0-PID:100753-HOST:node4-PC_NAME:glustervol-client-2-RECON_NO:-0 (version: 6.9) with subvol /data
[2020-06-16 05:16:06.512996] W [socket.c:775:__socket_rwv] 0-glusterfs: readv on 49.11.99.79:24007 failed (No data available)
[2020-06-16 05:16:06.513058] I [glusterfsd-mgmt.c:2719:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: node3
[2020-06-16 05:16:06.513065] I [glusterfsd-mgmt.c:2739:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2020-06-16 05:16:07.250816] W [glusterfsd.c:1596:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7ea5) [0x7f86210a6ea5] -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x55dd9c536625] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55dd9c53648b] ) 0-: received signum (15), shutting down
[2020-06-16 05:16:07.291970] E [socket.c:3636:socket_connect] 0-glusterfs: connection attempt on 49.11.99.79:24007 failed, (Network is unreachable)
[2020-06-16 05:16:07.292123] W [rpc-clnt.c:1698:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2020-06-16 05:16:07.292362] I [timer.c:86:gf_timer_call_cancel] (-->/lib64/libgfrpc.so.0(+0xf4e8) [0x7f862200d4e8] -->/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x6e) [0x7f862200c8fe] -->/lib64/libglusterfs.so.0(gf_timer_call_cancel+0x149) [0x7f8622272f79] ) 0-timer: ctx cleanup started
[2020-06-16 05:16:07.292431] E [timer.c:34:gf_timer_call_after] (-->/lib64/libgfrpc.so.0(rpc_transport_notify+0x23) [0x7f8622009a93] -->/lib64/libgfrpc.so.0(+0xf512) [0x7f862200d512] -->/lib64/libglusterfs.so.0(gf_timer_call_after+0x229) [0x7f8622272cb9] ) 0-timer: Either ctx is NULL or ctx cleanup started [Invalid argument]
[2020-06-16 05:16:07.292448] W [rpc-clnt.c:850:rpc_clnt_handle_disconnect] 0-glusterfs: Cannot create rpc_clnt_reconnect timer
[2020-06-16 05:16:45.105033] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 7.5 (args: /usr/sbin/glusterfsd -s node3 --volfile-id glustervol.node3.data -p /var/run/gluster/vols/glustervol/node3-data.pid -S /var/run/gluster/21fe7411872148f7.socket --brick-name /data -l /var/log/glusterfs/bricks/data.log --xlator-option *-posix.glusterd-uuid=ec137af6-4845-4ebb-955a-fac1d19b7b6c --process-name brick --brick-port 49152 --xlator-option glustervol-server.listen-port=49152)
[2020-06-16 05:16:45.106276] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 2304
[2020-06-16 05:16:45.111777] I [socket.c:958:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2020-06-16 05:16:45.116766] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-06-16 05:16:45.116830] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2020-06-16 05:16:46.159277] I [rpcsvc.c:2690:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2020-06-16 05:16:46.167295] I [socket.c:958:__socket_server_bind] 0-socket.glustervol-changelog: closing (AF_UNIX) reuse check socket 14
[2020-06-16 05:16:46.167589] I [trash.c:2450:init] 0-glustervol-trash: no option specified for 'eliminate', using NULL
Final graph:
+------------------------------------------------------------------------------+
  1: volume glustervol-posix
  2:     type storage/posix
  3:     option glusterd-uuid ec137af6-4845-4ebb-955a-fac1d19b7b6c
  4:     option directory /data
  5:     option volume-id 5422bb27-1863-47d5-b216-61751a01b759
  6:     option fips-mode-rchecksum on
  7:     option shared-brick-count 1
  8:     option reserve-size 0
  9: end-volume
 10:
 11: volume glustervol-trash
 12:     type features/trash
 13:     option trash-dir .trashcan
 14:     option brick-path /data
 15:     option trash-internal-op off
 16:     subvolumes glustervol-posix
 17: end-volume
 18:
 19: volume glustervol-changelog
 20:     type features/changelog
 21:     option changelog-brick /data
 22:     option changelog-dir /data/.glusterfs/changelogs
 23:     option changelog-barrier-timeout 120
 24:     subvolumes glustervol-trash
 25: end-volume
 26:
 27: volume glustervol-bitrot-stub
 28:     type features/bitrot-stub
 29:     option export /data
 30:     option bitrot disable
 31:     subvolumes glustervol-changelog
 32: end-volume
 33:
 34: volume glustervol-access-control
 35:     type features/access-control
 36:     subvolumes glustervol-bitrot-stub
 37: end-volume
 38:
 39: volume glustervol-locks
 40:     type features/locks
 41:     option enforce-mandatory-lock off
 42:     subvolumes glustervol-access-control
 43: end-volume
 44:
 45: volume glustervol-worm
 46:     type features/worm
 47:     option worm off
 48:     option worm-file-level off
 49:     option worm-files-deletable on
 50:     subvolumes glustervol-locks
 51: end-volume
 52:
 53: volume glustervol-read-only
 54:     type features/read-only
 55:     option read-only off
 56:     subvolumes glustervol-worm
 57: end-volume
 58:
 59: volume glustervol-leases
 60:     type features/leases
 61:     option leases off
 62:     subvolumes glustervol-read-only
 63: end-volume
 64:
 65: volume glustervol-upcall
 66:     type features/upcall
 67:     option cache-invalidation off
 68:     subvolumes glustervol-leases
 69: end-volume
 70:
 71: volume glustervol-io-threads
 72:     type performance/io-threads
 73:     subvolumes glustervol-upcall
 74: end-volume
 75:
 76: volume glustervol-selinux
 77:     type features/selinux
 78:     option selinux on
 79:     subvolumes glustervol-io-threads
 80: end-volume
 81:
 82: volume glustervol-marker
 83:     type features/marker
 84:     option volume-uuid 5422bb27-1863-47d5-b216-61751a01b759
 85:     option timestamp-file /var/lib/glusterd/vols/glustervol/marker.tstamp
 86:     option quota-version 0
 87:     option xtime off
 88:     option gsync-force-xtime off
 89:     option quota off
 90:     option inode-quota off
 91:     subvolumes glustervol-selinux
 92: end-volume
 93:
 94: volume glustervol-barrier
 95:     type features/barrier
 96:     option barrier disable
 97:     option barrier-timeout 120
 98:     subvolumes glustervol-marker
 99: end-volume
100:
101: volume glustervol-index
102:     type features/index
103:     option index-base /data/.glusterfs/indices
104:     option xattrop-dirty-watchlist trusted.afr.dirty
105:     option xattrop-pending-watchlist trusted.afr.glustervol-
106:     subvolumes glustervol-barrier
107: end-volume
108:
109: volume glustervol-quota
110:     type features/quota
111:     option volume-uuid glustervol
112:     option server-quota off
113:     option deem-statfs off
114:     subvolumes glustervol-index
115: end-volume
116:
117: volume /data
118:     type debug/io-stats
119:     option auth.addr./data.allow *
120:     option auth-path /data
121:     option auth.login.15e6dd85-fb90-4caf-971c-625ffec71229.password 6e2f0194-c002-439f-b6c8-6b1d71af66b8
122:     option auth.login./data.allow 15e6dd85-fb90-4caf-971c-625ffec71229
123:     option unique-id /data
124:     option log-level INFO
125:     option threads 16
126:     option latency-measurement off
127:     option count-fop-hits off
128:     option global-threading off
129:     subvolumes glustervol-quota
130: end-volume
131:
132: volume glustervol-server
133:     type protocol/server
134:     option transport.socket.listen-port 49152
135:     option rpc-auth.auth-glusterfs on
136:     option rpc-auth.auth-unix on
137:     option rpc-auth.auth-null on
138:     option rpc-auth-allow-insecure on
139:     option transport-type tcp
140:     option transport.address-family inet
141:     option auth.login./data.allow 15e6dd85-fb90-4caf-971c-625ffec71229
142:     option auth.login.15e6dd85-fb90-4caf-971c-625ffec71229.password 6e2f0194-c002-439f-b6c8-6b1d71af66b8
143:     option auth-path /data
144:     option auth.addr./data.allow *
145:     option transport.socket.keepalive 1
146:     option transport.socket.ssl-enabled off
147:     option transport.socket.keepalive-time 20
148:     option transport.socket.keepalive-interval 2
149:     option transport.socket.keepalive-count 9
150:     option transport.listen-backlog 1024
151:     subvolumes /data
152: end-volume
153:
+------------------------------------------------------------------------------+
[2020-06-16 05:16:47.112007] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.141.98.19"
[2020-06-16 05:16:47.112062] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 05:16:47.112077] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:8a2e528e-502d-4150-a994-1e1d44e17946-GRAPH_ID:0-PID:59760-HOST:node1-PC_NAME:glustervol-client-2-RECON_NO:-2 (version: 7.5) with subvol /data
[2020-06-16 05:16:47.114981] I [rpcsvc.c:866:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 0
[2020-06-16 05:16:47.116788] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "13.18.11.12"
[2020-06-16 05:16:47.116839] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 05:16:47.116850] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:5e6c8d94-95f8-4c61-a411-8649caa0f04c-GRAPH_ID:0-PID:3519-HOST:node2-PC_NAME:glustervol-client-2-RECON_NO:-2 (version: 7.5) with subvol /data
[2020-06-16 05:16:49.165088] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.11.99.79"
[2020-06-16 05:16:49.165137] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 05:16:49.165157] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:9628bd0e-07f7-4fba-abee-fe10bdd87944-GRAPH_ID:0-PID:2316-HOST:node3-PC_NAME:glustervol-client-2-RECON_NO:-0 (version: 7.5) with subvol /data
[2020-06-16 05:16:49.166590] I [rpcsvc.c:866:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 1
[2020-06-16 05:17:10.406294] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "17.18.11.14"
[2020-06-16 05:17:10.406348] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:ff0af68b-bc9a-4269-8589-f7ef60e27f5e-GRAPH_ID:0-PID:100753-HOST:node4-PC_NAME:glustervol-client-2-RECON_NO:-1 (version: 6.9) with subvol /data
[2020-06-16 06:17:08.352198] W [socket.c:775:__socket_rwv] 0-glusterfs: readv on 49.11.99.79:24007 failed (No data available)
[2020-06-16 06:17:08.352252] I [glusterfsd-mgmt.c:2719:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: node3
[2020-06-16 06:17:08.352260] I [glusterfsd-mgmt.c:2739:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2020-06-16 06:17:09.128929] W [glusterfsd.c:1596:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7ea5) [0x7ff5125c0ea5] -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x562d2c1f5625] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x562d2c1f548b] ) 0-: received signum (15), shutting down
[2020-06-16 06:17:09.177491] E [socket.c:3636:socket_connect] 0-glusterfs: connection attempt on 49.11.99.79:24007 failed, (Network is unreachable)
[2020-06-16 06:17:09.177602] W [rpc-clnt.c:1698:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2020-06-16 06:17:09.177743] I [timer.c:86:gf_timer_call_cancel] (-->/lib64/libgfrpc.so.0(+0xf4e8) [0x7ff5135274e8] -->/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x6e) [0x7ff5135268fe] -->/lib64/libglusterfs.so.0(gf_timer_call_cancel+0x149) [0x7ff51378cf79] ) 0-timer: ctx cleanup started
[2020-06-16 06:17:09.177789] E [timer.c:34:gf_timer_call_after] (-->/lib64/libgfrpc.so.0(rpc_transport_notify+0x23) [0x7ff513523a93] -->/lib64/libgfrpc.so.0(+0xf512) [0x7ff513527512] -->/lib64/libglusterfs.so.0(gf_timer_call_after+0x229) [0x7ff51378ccb9] ) 0-timer: Either ctx is NULL or ctx cleanup started [Invalid argument]
[2020-06-16 06:17:09.177800] W [rpc-clnt.c:850:rpc_clnt_handle_disconnect] 0-glusterfs: Cannot create rpc_clnt_reconnect timer
[2020-06-16 06:17:46.088080] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 7.5 (args: /usr/sbin/glusterfsd -s node3 --volfile-id glustervol.node3.data -p /var/run/gluster/vols/glustervol/node3-data.pid -S /var/run/gluster/21fe7411872148f7.socket --brick-name /data -l /var/log/glusterfs/bricks/data.log --xlator-option *-posix.glusterd-uuid=ec137af6-4845-4ebb-955a-fac1d19b7b6c --process-name brick --brick-port 49152 --xlator-option glustervol-server.listen-port=49152)
[2020-06-16 06:17:46.089241] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 1885
[2020-06-16 06:17:46.099072] I [socket.c:958:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2020-06-16 06:17:46.104753] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-06-16 06:17:46.104804] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2020-06-16 06:17:47.152209] I [rpcsvc.c:2690:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2020-06-16 06:17:47.160983] I [socket.c:958:__socket_server_bind] 0-socket.glustervol-changelog: closing (AF_UNIX) reuse check socket 14
[2020-06-16 06:17:47.161238] I [trash.c:2450:init] 0-glustervol-trash: no option specified for 'eliminate', using NULL
Final graph:
+------------------------------------------------------------------------------+
  1: volume glustervol-posix
  2:     type storage/posix
  3:     option glusterd-uuid ec137af6-4845-4ebb-955a-fac1d19b7b6c
  4:     option directory /data
  5:     option volume-id 5422bb27-1863-47d5-b216-61751a01b759
  6:     option fips-mode-rchecksum on
  7:     option shared-brick-count 1
  8:     option reserve-size 0
  9: end-volume
 10:
 11: volume glustervol-trash
 12:     type features/trash
 13:     option trash-dir .trashcan
 14:     option brick-path /data
 15:     option trash-internal-op off
 16:     subvolumes glustervol-posix
 17: end-volume
 18:
 19: volume glustervol-changelog
 20:     type features/changelog
 21:     option changelog-brick /data
 22:     option changelog-dir /data/.glusterfs/changelogs
 23:     option changelog-barrier-timeout 120
 24:     subvolumes glustervol-trash
 25: end-volume
 26:
 27: volume glustervol-bitrot-stub
 28:     type features/bitrot-stub
 29:     option export /data
 30:     option bitrot disable
 31:     subvolumes glustervol-changelog
 32: end-volume
 33:
 34: volume glustervol-access-control
 35:     type features/access-control
 36:     subvolumes glustervol-bitrot-stub
 37: end-volume
 38:
 39: volume glustervol-locks
 40:     type features/locks
 41:     option enforce-mandatory-lock off
 42:     subvolumes glustervol-access-control
 43: end-volume
 44:
 45: volume glustervol-worm
 46:     type features/worm
 47:     option worm off
 48:     option worm-file-level off
 49:     option worm-files-deletable on
 50:     subvolumes glustervol-locks
 51: end-volume
 52:
 53: volume glustervol-read-only
 54:     type features/read-only
 55:     option read-only off
 56:     subvolumes glustervol-worm
 57: end-volume
 58:
 59: volume glustervol-leases
 60:     type features/leases
 61:     option leases off
 62:     subvolumes glustervol-read-only
 63: end-volume
 64:
 65: volume glustervol-upcall
 66:     type features/upcall
 67:     option cache-invalidation off
 68:     subvolumes glustervol-leases
 69: end-volume
 70:
 71: volume glustervol-io-threads
 72:     type performance/io-threads
 73:     subvolumes glustervol-upcall
 74: end-volume
 75:
 76: volume glustervol-selinux
 77:     type features/selinux
 78:     option selinux on
 79:     subvolumes glustervol-io-threads
 80: end-volume
 81:
 82: volume glustervol-marker
 83:     type features/marker
 84:     option volume-uuid 5422bb27-1863-47d5-b216-61751a01b759
 85:     option timestamp-file /var/lib/glusterd/vols/glustervol/marker.tstamp
 86:     option quota-version 0
 87:     option xtime off
 88:     option gsync-force-xtime off
 89:     option quota off
 90:     option inode-quota off
 91:     subvolumes glustervol-selinux
 92: end-volume
 93:
 94: volume glustervol-barrier
 95:     type features/barrier
 96:     option barrier disable
 97:     option barrier-timeout 120
 98:     subvolumes glustervol-marker
 99: end-volume
100:
101: volume glustervol-index
102:     type features/index
103:     option index-base /data/.glusterfs/indices
104:     option xattrop-dirty-watchlist trusted.afr.dirty
105:     option xattrop-pending-watchlist trusted.afr.glustervol-
106:     subvolumes glustervol-barrier
107: end-volume
108:
109: volume glustervol-quota
110:     type features/quota
111:     option volume-uuid glustervol
112:     option server-quota off
113:     option deem-statfs off
114:     subvolumes glustervol-index
115: end-volume
116:
117: volume /data
118:     type debug/io-stats
119:     option auth.addr./data.allow *
120:     option auth-path /data
121:     option auth.login.15e6dd85-fb90-4caf-971c-625ffec71229.password 6e2f0194-c002-439f-b6c8-6b1d71af66b8
122:     option auth.login./data.allow 15e6dd85-fb90-4caf-971c-625ffec71229
123:     option unique-id /data
124:     option log-level INFO
125:     option threads 16
126:     option latency-measurement off
127:     option count-fop-hits off
128:     option global-threading off
129:     subvolumes glustervol-quota
130: end-volume
131:
132: volume glustervol-server
133:     type protocol/server
134:     option transport.socket.listen-port 49152
135:     option rpc-auth.auth-glusterfs on
136:     option rpc-auth.auth-unix on
137:     option rpc-auth.auth-null on
138:     option rpc-auth-allow-insecure on
139:     option transport-type tcp
140:     option transport.address-family inet
141:     option auth.login./data.allow 15e6dd85-fb90-4caf-971c-625ffec71229
142:     option auth.login.15e6dd85-fb90-4caf-971c-625ffec71229.password 6e2f0194-c002-439f-b6c8-6b1d71af66b8
143:     option auth-path /data
144:     option auth.addr./data.allow *
145:     option transport.socket.keepalive 1
146:     option transport.socket.ssl-enabled off
147:     option transport.socket.keepalive-time 20
148:     option transport.socket.keepalive-interval 2
149:     option transport.socket.keepalive-count 9
150:     option transport.listen-backlog 1024
151:     subvolumes /data
152: end-volume
153:
+------------------------------------------------------------------------------+
[2020-06-16 06:17:49.097093] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.141.98.19"
[2020-06-16 06:17:49.097164] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 06:17:49.097191] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:8a2e528e-502d-4150-a994-1e1d44e17946-GRAPH_ID:0-PID:59760-HOST:node1-PC_NAME:glustervol-client-2-RECON_NO:-3 (version: 7.5) with subvol /data
[2020-06-16 06:17:49.100424] I [rpcsvc.c:866:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 0
[2020-06-16 06:17:49.233948] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "13.18.11.12"
[2020-06-16 06:17:49.234002] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 06:17:49.234014] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:5e6c8d94-95f8-4c61-a411-8649caa0f04c-GRAPH_ID:0-PID:3519-HOST:node2-PC_NAME:glustervol-client-2-RECON_NO:-3 (version: 7.5) with subvol /data
[2020-06-16 06:17:49.235248] I [rpcsvc.c:866:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 1
[2020-06-16 06:17:50.146556] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.11.99.79"
[2020-06-16 06:17:50.146584] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 06:17:50.146594] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:464d653b-1c3c-46e2-bfe0-d26c59860370-GRAPH_ID:0-PID:1894-HOST:node3-PC_NAME:glustervol-client-2-RECON_NO:-0 (version: 7.5) with subvol /data
[2020-06-16 06:18:04.226614] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "17.18.11.14"
[2020-06-16 06:18:04.226674] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:ff0af68b-bc9a-4269-8589-f7ef60e27f5e-GRAPH_ID:0-PID:100753-HOST:node4-PC_NAME:glustervol-client-2-RECON_NO:-2 (version: 6.9) with subvol /data
[2020-06-16 07:04:45.629024] W [socket.c:775:__socket_rwv] 0-glusterfs: readv on 49.11.99.79:24007 failed (No data available)
[2020-06-16 07:04:45.629121] I [glusterfsd-mgmt.c:2719:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: node3
[2020-06-16 07:04:45.629129] I [glusterfsd-mgmt.c:2739:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2020-06-16 07:04:46.396489] W [glusterfsd.c:1596:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7ea5) [0x7fcd9d6afea5] -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x56175628a625] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x56175628a48b] ) 0-: received signum (15), shutting down
[2020-06-16 07:04:46.439188] E [socket.c:3636:socket_connect] 0-glusterfs: connection attempt on 49.11.99.79:24007 failed, (Network is unreachable)
[2020-06-16 07:04:46.439286] W [rpc-clnt.c:1698:rpc_clnt_submit] 0-glusterfs: error returned while attempting to connect to host:(null), port:0
[2020-06-16 07:04:46.439444] I [timer.c:86:gf_timer_call_cancel] (-->/lib64/libgfrpc.so.0(+0xf4e8) [0x7fcd9e6164e8] -->/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x6e) [0x7fcd9e6158fe] -->/lib64/libglusterfs.so.0(gf_timer_call_cancel+0x149) [0x7fcd9e87bf79] ) 0-timer: ctx cleanup started
[2020-06-16 07:04:46.439489] E [timer.c:34:gf_timer_call_after] (-->/lib64/libgfrpc.so.0(rpc_transport_notify+0x23) [0x7fcd9e612a93] -->/lib64/libgfrpc.so.0(+0xf512) [0x7fcd9e616512] -->/lib64/libglusterfs.so.0(gf_timer_call_after+0x229) [0x7fcd9e87bcb9] ) 0-timer: Either ctx is NULL or ctx cleanup started [Invalid argument]
[2020-06-16 07:04:46.439500] W [rpc-clnt.c:850:rpc_clnt_handle_disconnect] 0-glusterfs: Cannot create rpc_clnt_reconnect timer
[2020-06-16 07:05:22.086336] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 7.5 (args: /usr/sbin/glusterfsd -s node3 --volfile-id glustervol.node3.data -p /var/run/gluster/vols/glustervol/node3-data.pid -S /var/run/gluster/21fe7411872148f7.socket --brick-name /data -l /var/log/glusterfs/bricks/data.log --xlator-option *-posix.glusterd-uuid=ec137af6-4845-4ebb-955a-fac1d19b7b6c --process-name brick --brick-port 49152 --xlator-option glustervol-server.listen-port=49152)
[2020-06-16 07:05:22.087644] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 1973
[2020-06-16 07:05:22.093003] I [socket.c:958:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2020-06-16 07:05:22.100197] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-06-16 07:05:22.100282] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2020-06-16 07:05:23.143298] I [rpcsvc.c:2690:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2020-06-16 07:05:23.149723] I [socket.c:958:__socket_server_bind] 0-socket.glustervol-changelog: closing (AF_UNIX) reuse check socket 14
[2020-06-16 07:05:23.149998] I [trash.c:2450:init] 0-glustervol-trash: no option specified for 'eliminate', using NULL
Final graph:
+------------------------------------------------------------------------------+
  1: volume glustervol-posix
  2:     type storage/posix
  3:     option glusterd-uuid ec137af6-4845-4ebb-955a-fac1d19b7b6c
  4:     option directory /data
  5:     option volume-id 5422bb27-1863-47d5-b216-61751a01b759
  6:     option fips-mode-rchecksum on
  7:     option shared-brick-count 1
  8:     option reserve-size 0
  9: end-volume
 10:
 11: volume glustervol-trash
 12:     type features/trash
 13:     option trash-dir .trashcan
 14:     option brick-path /data
 15:     option trash-internal-op off
 16:     subvolumes glustervol-posix
 17: end-volume
 18:
 19: volume glustervol-changelog
 20:     type features/changelog
 21:     option changelog-brick /data
 22:     option changelog-dir /data/.glusterfs/changelogs
 23:     option changelog-barrier-timeout 120
 24:     subvolumes glustervol-trash
 25: end-volume
 26:
 27: volume glustervol-bitrot-stub
 28:     type features/bitrot-stub
 29:     option export /data
 30:     option bitrot disable
 31:     subvolumes glustervol-changelog
 32: end-volume
 33:
 34: volume glustervol-access-control
 35:     type features/access-control
 36:     subvolumes glustervol-bitrot-stub
 37: end-volume
 38:
 39: volume glustervol-locks
 40:     type features/locks
 41:     option enforce-mandatory-lock off
 42:     subvolumes glustervol-access-control
 43: end-volume
 44:
 45: volume glustervol-worm
 46:     type features/worm
 47:     option worm off
 48:     option worm-file-level off
 49:     option worm-files-deletable on
 50:     subvolumes glustervol-locks
 51: end-volume
 52:
 53: volume glustervol-read-only
 54:     type features/read-only
 55:     option read-only off
 56:     subvolumes glustervol-worm
 57: end-volume
 58:
 59: volume glustervol-leases
 60:     type features/leases
 61:     option leases off
 62:     subvolumes glustervol-read-only
 63: end-volume
 64:
 65: volume glustervol-upcall
 66:     type features/upcall
 67:     option cache-invalidation off
 68:     subvolumes glustervol-leases
 69: end-volume
 70:
 71: volume glustervol-io-threads
 72:     type performance/io-threads
 73:     subvolumes glustervol-upcall
 74: end-volume
 75:
 76: volume glustervol-selinux
 77:     type features/selinux
 78:     option selinux on
 79:     subvolumes glustervol-io-threads
 80: end-volume
 81:
 82: volume glustervol-marker
 83:     type features/marker
 84:     option volume-uuid 5422bb27-1863-47d5-b216-61751a01b759
 85:     option timestamp-file /var/lib/glusterd/vols/glustervol/marker.tstamp
 86:     option quota-version 0
 87:     option xtime off
 88:     option gsync-force-xtime off
 89:     option quota off
 90:     option inode-quota off
 91:     subvolumes glustervol-selinux
 92: end-volume
 93:
 94: volume glustervol-barrier
 95:     type features/barrier
 96:     option barrier disable
 97:     option barrier-timeout 120
 98:     subvolumes glustervol-marker
 99: end-volume
100:
101: volume glustervol-index
102:     type features/index
103:     option index-base /data/.glusterfs/indices
104:     option xattrop-dirty-watchlist trusted.afr.dirty
105:     option xattrop-pending-watchlist trusted.afr.glustervol-
106:     subvolumes glustervol-barrier
107: end-volume
108:
109: volume glustervol-quota
110:     type features/quota
111:     option volume-uuid glustervol
112:     option server-quota off
113:     option deem-statfs off
114:     subvolumes glustervol-index
115: end-volume
116:
117: volume /data
118:     type debug/io-stats
119:     option auth.addr./data.allow *
120:     option auth-path /data
121:     option auth.login.15e6dd85-fb90-4caf-971c-625ffec71229.password 6e2f0194-c002-439f-b6c8-6b1d71af66b8
122:     option auth.login./data.allow 15e6dd85-fb90-4caf-971c-625ffec71229
123:     option unique-id /data
124:     option log-level INFO
125:     option threads 16
126:     option latency-measurement off
127:     option count-fop-hits off
128:     option global-threading off
129:     subvolumes glustervol-quota
130: end-volume
131:
132: volume glustervol-server
133:     type protocol/server
134:     option transport.socket.listen-port 49152
135:     option rpc-auth.auth-glusterfs on
136:     option rpc-auth.auth-unix on
137:     option rpc-auth.auth-null on
138:     option rpc-auth-allow-insecure on
139:     option transport-type tcp
140:     option transport.address-family inet
141:     option auth.login./data.allow 15e6dd85-fb90-4caf-971c-625ffec71229
142:     option auth.login.15e6dd85-fb90-4caf-971c-625ffec71229.password 6e2f0194-c002-439f-b6c8-6b1d71af66b8
143:     option auth-path /data
144:     option auth.addr./data.allow *
145:     option transport.socket.keepalive 1
146:     option transport.socket.ssl-enabled off
147:     option transport.socket.keepalive-time 20
148:     option transport.socket.keepalive-interval 2
149:     option transport.socket.keepalive-count 9
150:     option transport.listen-backlog 1024
151:     subvolumes /data
152: end-volume
153:
+------------------------------------------------------------------------------+
[2020-06-16 07:05:23.817880] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.141.98.19"
[2020-06-16 07:05:23.817978] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 07:05:23.818006] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:8a2e528e-502d-4150-a994-1e1d44e17946-GRAPH_ID:0-PID:59760-HOST:node1-PC_NAME:glustervol-client-2-RECON_NO:-4 (version: 7.5) with subvol /data
[2020-06-16 07:05:23.821164] I [rpcsvc.c:866:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 0
[2020-06-16 07:05:23.950052] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "13.18.11.12"
[2020-06-16 07:05:23.950123] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 07:05:23.950143] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:5e6c8d94-95f8-4c61-a411-8649caa0f04c-GRAPH_ID:0-PID:3519-HOST:node2-PC_NAME:glustervol-client-2-RECON_NO:-4 (version: 7.5) with subvol /data
[2020-06-16 07:05:23.951663] I [rpcsvc.c:866:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 1
[2020-06-16 07:05:26.149895] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "49.11.99.79"
[2020-06-16 07:05:26.149932] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 15e6dd85-fb90-4caf-971c-625ffec71229
[2020-06-16 07:05:26.149944] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:047b5329-a111-4e09-8fe2-ad632369e4db-GRAPH_ID:0-PID:1995-HOST:node3-PC_NAME:glustervol-client-2-RECON_NO:-0 (version: 7.5) with subvol /data
[2020-06-16 07:05:28.816629] I [addr.c:54:compare_addr_and_update] 0-/data: allowed = "*", received addr = "17.18.11.14"
[2020-06-16 07:05:28.816698] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-glustervol-server: accepted client from CTX_ID:ff0af68b-bc9a-4269-8589-f7ef60e27f5e-GRAPH_ID:0-PID:100753-HOST:node4-PC_NAME:glustervol-client-2-RECON_NO:-3 (version: 6.9) with subvol /data
[2020-06-16 05:03:25.515535] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api
[2020-06-16 05:03:25.516779] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2020-06-16 05:03:25.532082] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/storage/bd.so: cannot open shared object file: No such file or directory
[2020-06-16 05:03:36.466702] I [MSGID: 106487] [glusterd-handler.c:1339:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2020-06-16 05:03:25.515535] and [2020-06-16 05:03:25.515675]
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2020-06-16 05:03:25.516779] and [2020-06-16 05:03:25.517106]
[2020-06-16 05:06:31.574457] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api
[2020-06-16 05:06:31.575283] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2020-06-16 05:06:31.582484] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/storage/bd.so: cannot open shared object file: No such file or directory
[2020-06-16 05:07:01.089815] I [MSGID: 106487] [glusterd-handler.c:1339:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2020-06-16 05:06:31.574457] and [2020-06-16 05:06:31.574532]
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2020-06-16 05:06:31.575283] and [2020-06-16 05:06:31.575478]
[2020-06-16 05:08:43.657674] I [MSGID: 106487] [glusterd-handler.c:1339:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2020-06-16 05:09:57.888545] I [MSGID: 106488] [glusterd-handler.c:1400:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
The message "I [MSGID: 106488] [glusterd-handler.c:1400:__glusterd_handle_cli_get_volume] 0-management: Received get vol req" repeated 2 times between [2020-06-16 05:09:57.888545] and [2020-06-16 05:10:07.846460]
[2020-06-16 05:11:09.233188] I [MSGID: 106499] [glusterd-handler.c:4264:__glusterd_handle_status_volume] 0-management: Received status volume req for volume glustervol
[2020-06-16 05:16:06.510649] W [glusterfsd.c:1596:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7ea5) [0x7f146bac4ea5] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x556bf7e5d625] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x556bf7e5d48b] ) 0-: received signum (15), shutting down
[2020-06-16 05:16:43.438233] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 7.5 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2020-06-16 05:16:43.440344] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 1606
[2020-06-16 05:16:43.471253] I [MSGID: 106478] [glusterd.c:1426:init] 0-management: Maximum allowed open file descriptors set to 65536
[2020-06-16 05:16:43.471309] I [MSGID: 106479] [glusterd.c:1482:init] 0-management: Using /var/lib/glusterd as working directory
[2020-06-16 05:16:43.471319] I [MSGID: 106479] [glusterd.c:1488:init] 0-management: Using /var/run/gluster as pid file working directory
[2020-06-16 05:16:43.482684] I [socket.c:1015:__socket_server_bind] 0-socket.management: process started listening on port (24007)
[2020-06-16 05:16:43.482850] E [rpc-transport.c:300:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/7.5/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2020-06-16 05:16:43.482859] W [rpc-transport.c:304:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
[2020-06-16 05:16:43.482897] W [rpcsvc.c:1981:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2020-06-16 05:16:43.482904] E [MSGID: 106244] [glusterd.c:1781:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2020-06-16 05:16:43.485905] I [socket.c:958:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 12
[2020-06-16 05:16:43.488151] I [MSGID: 106059] [glusterd.c:1865:init] 0-management: max-port override: 60999
[2020-06-16 05:16:43.505101] I [MSGID: 106228] [glusterd.c:484:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [No such file or directory]
[2020-06-16 05:16:43.507606] I [MSGID: 106513] [glusterd-store.c:2257:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 70200
[2020-06-16 05:16:43.510628] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: tier-enabled
[2020-06-16 05:16:43.510770] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: brick-0
[2020-06-16 05:16:43.510800] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: brick-1
[2020-06-16 05:16:43.510821] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: brick-2
[2020-06-16 05:16:43.512454] I [MSGID: 106544] [glusterd.c:152:glusterd_uuid_init] 0-management: retrieved UUID: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 05:16:43.568105] I [MSGID: 106498] [glusterd-handler.c:3519:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2020-06-16 05:16:43.568547] I [MSGID: 106498] [glusterd-handler.c:3519:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2020-06-16 05:16:43.568620] W [MSGID: 106061] [glusterd-handler.c:3315:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2020-06-16 05:16:43.568655] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2020-06-16 05:16:43.646793] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 1024
  8:     option max-port 60999
  9:     option event-threads 1
 10:     option ping-timeout 0
 11:     option transport.rdma.listen-port 24008
 12:     option transport.socket.listen-port 24007
 13:     option transport.socket.read-fail-log off
 14:     option transport.socket.keepalive-interval 2
 15:     option transport.socket.keepalive-time 10
 16:     option transport-type rdma
 17:     option working-directory /var/lib/glusterd
 18: end-volume
 19:
+------------------------------------------------------------------------------+
[2020-06-16 05:16:43.646779] W [MSGID: 106061] [glusterd-handler.c:3315:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2020-06-16 05:16:43.650579] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-06-16 05:16:45.097877] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90, host: node1, port: 0
[2020-06-16 05:16:45.099628] I [glusterd-utils.c:6582:glusterd_brick_start] 0-management: starting a fresh brick process for brick /data
[2020-06-16 05:16:45.102344] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2020-06-16 05:16:45.109746] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600
[2020-06-16 05:16:45.109893] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped
[2020-06-16 05:16:45.109926] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: nfs service is stopped
[2020-06-16 05:16:45.109958] I [MSGID: 106599] [glusterd-nfs-svc.c:81:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed
[2020-06-16 05:16:45.110004] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2020-06-16 05:16:45.110141] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped
[2020-06-16 05:16:45.110154] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: quotad service is stopped
[2020-06-16 05:16:45.110186] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2020-06-16 05:16:45.110292] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2020-06-16 05:16:45.110305] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: bitd service is stopped
[2020-06-16 05:16:45.110334] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2020-06-16 05:16:45.110436] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped
[2020-06-16 05:16:45.110448] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: scrub service is stopped
[2020-06-16 05:16:45.110487] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2020-06-16 05:16:45.110622] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2020-06-16 05:16:45.112566] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600
[2020-06-16 05:16:45.112675] I [MSGID: 106567] [glusterd-svc-mgmt.c:230:glusterd_svc_start] 0-management: Starting glustershd service
[2020-06-16 05:16:46.115547] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 05:16:46.117571] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 05:16:46.117926] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 05:16:46.119809] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 05:16:46.122993] I [MSGID: 106496] [glusterd-handshake.c:935:__server_getspec] 0-management: Received mount request for volume glustervol.node3.data
[2020-06-16 05:16:46.123928] I [MSGID: 106496] [glusterd-handshake.c:935:__server_getspec] 0-management: Received mount request for volume shd/glustervol
[2020-06-16 05:16:46.124064] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 05:16:46.124978] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node1 (0), ret: 0, op_ret: 0
[2020-06-16 05:16:46.127951] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 05:16:46.128955] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 05:16:46.129022] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 05:16:46.174520] I [MSGID: 106142] [glusterd-pmap.c:290:pmap_registry_bind] 0-pmap: adding brick /data on port 49152
[2020-06-16 05:17:12.792299] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0, host: node2, port: 0
[2020-06-16 05:17:12.794855] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 05:17:12.796022] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 05:17:12.798022] I [MSGID: 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=glustervol) to existing process with pid 2316
[2020-06-16 05:17:12.798243] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 05:17:12.798674] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 05:17:12.800671] I [MSGID: 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume glustervol attached successfully to pid 2316
[2020-06-16 05:17:12.801336] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 05:17:12.802348] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node2 (0), ret: 0, op_ret: 0
[2020-06-16 05:17:12.804660] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 05:17:12.805701] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 05:17:12.805793] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 06:16:30.299892] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api
[2020-06-16 06:16:30.301920] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2020-06-16 06:16:30.308973] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/storage/bd.so: cannot open shared object file: No such file or directory
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2020-06-16 06:16:30.299892] and [2020-06-16 06:16:30.299974]
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2020-06-16 06:16:30.301920] and [2020-06-16 06:16:30.302136]
[2020-06-16 06:17:08.349669] W [glusterfsd.c:1596:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7ea5) [0x7fedf1507ea5] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x55daebf78625] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x55daebf7848b] ) 0-: received signum (15), shutting down
[2020-06-16 06:17:45.418434] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 7.5 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2020-06-16 06:17:45.419643] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 1617
[2020-06-16 06:17:45.451147] I [MSGID: 106478] [glusterd.c:1426:init] 0-management: Maximum allowed open file descriptors set to 65536
[2020-06-16 06:17:45.451248] I [MSGID: 106479] [glusterd.c:1482:init] 0-management: Using /var/lib/glusterd as working directory
[2020-06-16 06:17:45.451269] I [MSGID: 106479] [glusterd.c:1488:init] 0-management: Using /var/run/gluster as pid file working directory
[2020-06-16 06:17:45.464643] I [socket.c:1015:__socket_server_bind] 0-socket.management: process started listening on port (24007)
[2020-06-16 06:17:45.464817] E [rpc-transport.c:300:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/7.5/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2020-06-16 06:17:45.464826] W [rpc-transport.c:304:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
[2020-06-16 06:17:45.464869] W [rpcsvc.c:1981:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2020-06-16 06:17:45.464876] E [MSGID: 106244] [glusterd.c:1781:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2020-06-16 06:17:45.468040] I [socket.c:958:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 12
[2020-06-16 06:17:45.469795] I [MSGID: 106059] [glusterd.c:1865:init] 0-management: max-port override: 60999
[2020-06-16 06:17:45.487597] I [MSGID: 106228] [glusterd.c:484:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [No such file or directory]
[2020-06-16 06:17:45.490370] I [MSGID: 106513] [glusterd-store.c:2257:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 70200
[2020-06-16 06:17:45.495752] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: tier-enabled
[2020-06-16 06:17:45.495918] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: brick-0
[2020-06-16 06:17:45.495953] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: brick-1
[2020-06-16 06:17:45.495975] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: brick-2
[2020-06-16 06:17:45.497134] I [MSGID: 106544] [glusterd.c:152:glusterd_uuid_init] 0-management: retrieved UUID: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 06:17:45.571610] I [MSGID: 106498] [glusterd-handler.c:3519:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2020-06-16 06:17:45.572321] I [MSGID: 106498] [glusterd-handler.c:3519:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2020-06-16 06:17:45.572439] W [MSGID: 106061] [glusterd-handler.c:3315:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2020-06-16 06:17:45.572515] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2020-06-16 06:17:45.650858] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 1024
  8:     option max-port 60999
  9:     option event-threads 1
 10:     option ping-timeout 0
 11:     option transport.rdma.listen-port 24008
 12:     option transport.socket.listen-port 24007
 13:     option transport.socket.read-fail-log off
 14:     option transport.socket.keepalive-interval 2
 15:     option transport.socket.keepalive-time 10
 16:     option transport-type rdma
 17:     option working-directory /var/lib/glusterd
 18: end-volume
 19:
+------------------------------------------------------------------------------+
[2020-06-16 06:17:45.650831] W [MSGID: 106061] [glusterd-handler.c:3315:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2020-06-16 06:17:45.655529] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-06-16 06:17:46.081403] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90, host: node1, port: 0
[2020-06-16 06:17:46.083229] I [glusterd-utils.c:6582:glusterd_brick_start] 0-management: starting a fresh brick process for brick /data
[2020-06-16 06:17:46.086021] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2020-06-16 06:17:46.094817] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600
[2020-06-16 06:17:46.095010] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped
[2020-06-16 06:17:46.095058] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: nfs service is stopped
[2020-06-16 06:17:46.095086] I [MSGID: 106599] [glusterd-nfs-svc.c:81:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed
[2020-06-16 06:17:46.095141] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2020-06-16 06:17:46.095345] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped
[2020-06-16 06:17:46.095368] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: quotad service is stopped
[2020-06-16 06:17:46.095417] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2020-06-16 06:17:46.095599] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2020-06-16 06:17:46.095620] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: bitd service is stopped
[2020-06-16 06:17:46.095667] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2020-06-16 06:17:46.095844] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped
[2020-06-16 06:17:46.095864] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: scrub service is stopped
[2020-06-16 06:17:46.095935] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2020-06-16 06:17:46.096156] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2020-06-16 06:17:46.099010] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600
[2020-06-16 06:17:46.099182] I [MSGID: 106567] [glusterd-svc-mgmt.c:230:glusterd_svc_start] 0-management: Starting glustershd service
[2020-06-16 06:17:47.101737] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 06:17:47.103848] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 06:17:47.105043] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 06:17:47.105371] I [MSGID: 106496] [glusterd-handshake.c:935:__server_getspec] 0-management: Received mount request for volume glustervol.node3.data
[2020-06-16 06:17:47.106275] I [MSGID: 106496] [glusterd-handshake.c:935:__server_getspec] 0-management: Received mount request for volume shd/glustervol
[2020-06-16 06:17:47.106379] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 06:17:47.110877] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 06:17:47.111764] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node1 (0), ret: 0, op_ret: 0
[2020-06-16 06:17:47.114688] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 06:17:47.116358] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 06:17:47.116426] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 06:17:47.167267] I [MSGID: 106142] [glusterd-pmap.c:290:pmap_registry_bind] 0-pmap: adding brick /data on port 49152
[2020-06-16 06:18:14.847449] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0, host: node2, port: 0
[2020-06-16 06:18:14.851972] I [MSGID: 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=glustervol) to existing process with pid 1894
[2020-06-16 06:18:14.852224] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 06:18:14.853711] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 06:18:14.853997] I [MSGID: 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume glustervol attached successfully to pid 1894
[2020-06-16 06:18:14.854639] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 06:18:14.855111] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 06:18:14.857380] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 06:18:14.858430] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node2 (0), ret: 0, op_ret: 0
[2020-06-16 06:18:14.860626] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 06:18:14.861822] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 06:18:14.861921] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 06:46:35.080176] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api
[2020-06-16 06:46:35.082275] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2020-06-16 06:46:35.089377] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/storage/bd.so: cannot open shared object file: No such file or directory
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2020-06-16 06:46:35.080176] and [2020-06-16 06:46:35.080259]
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2020-06-16 06:46:35.082275] and [2020-06-16 06:46:35.082481]
[2020-06-16 06:54:53.054246] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api
[2020-06-16 06:54:53.055110] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2020-06-16 06:54:53.062173] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/storage/bd.so: cannot open shared object file: No such file or directory
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2020-06-16 06:54:53.054246] and [2020-06-16 06:54:53.054332]
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2020-06-16 06:54:53.055110] and [2020-06-16 06:54:53.055343]
[2020-06-16 07:04:45.626884] W [glusterfsd.c:1596:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7ea5) [0x7f6e6cc74ea5] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x5564f4a90625] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x5564f4a9048b] ) 0-: received signum (15), shutting down
[2020-06-16 07:05:21.272090] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 7.5 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2020-06-16 07:05:21.273749] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 1596
[2020-06-16 07:05:21.303478] I [MSGID: 106478] [glusterd.c:1426:init] 0-management: Maximum allowed open file descriptors set to 65536
[2020-06-16 07:05:21.303534] I [MSGID: 106479] [glusterd.c:1482:init] 0-management: Using /var/lib/glusterd as working directory
[2020-06-16 07:05:21.303546] I [MSGID: 106479] [glusterd.c:1488:init] 0-management: Using /var/run/gluster as pid file working directory
[2020-06-16 07:05:21.314122] I [socket.c:1015:__socket_server_bind] 0-socket.management: process started listening on port (24007)
[2020-06-16 07:05:21.314319] E [rpc-transport.c:300:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/7.5/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2020-06-16 07:05:21.314329] W [rpc-transport.c:304:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
[2020-06-16 07:05:21.314381] W [rpcsvc.c:1981:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2020-06-16 07:05:21.314390] E [MSGID: 106244] [glusterd.c:1781:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2020-06-16 07:05:21.317730] I [socket.c:958:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 12
[2020-06-16 07:05:21.319763] I [MSGID: 106059] [glusterd.c:1865:init] 0-management: max-port override: 60999
[2020-06-16 07:05:21.331491] I [MSGID: 106228] [glusterd.c:484:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [No such file or directory]
[2020-06-16 07:05:21.332890] I [MSGID: 106513] [glusterd-store.c:2257:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 70200
[2020-06-16 07:05:21.336997] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: tier-enabled
[2020-06-16 07:05:21.337094] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: brick-0
[2020-06-16 07:05:21.337112] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: brick-1
[2020-06-16 07:05:21.337124] W [MSGID: 106204] [glusterd-store.c:3275:glusterd_store_update_volinfo] 0-management: Unknown key: brick-2
[2020-06-16 07:05:21.338299] I [MSGID: 106544] [glusterd.c:152:glusterd_uuid_init] 0-management: retrieved UUID: ec137af6-4845-4ebb-955a-fac1d19b7b6c
[2020-06-16 07:05:21.398317] I [MSGID: 106498] [glusterd-handler.c:3519:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2020-06-16 07:05:21.398869] I [MSGID: 106498] [glusterd-handler.c:3519:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2020-06-16 07:05:21.399233] W [MSGID: 106061] [glusterd-handler.c:3315:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2020-06-16 07:05:21.399272] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2020-06-16 07:05:21.404020] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 1024
  8:     option max-port 60999
  9:     option event-threads 1
 10:     option ping-timeout 0
 11:     option transport.rdma.listen-port 24008
 12:     option transport.socket.listen-port 24007
 13:     option transport.socket.read-fail-log off
 14:     option transport.socket.keepalive-interval 2
 15:     option transport.socket.keepalive-time 10
 16:     option transport-type rdma
 17:     option working-directory /var/lib/glusterd
 18: end-volume
 19:
+------------------------------------------------------------------------------+
[2020-06-16 07:05:21.404009] W [MSGID: 106061] [glusterd-handler.c:3315:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2020-06-16 07:05:21.409673] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-06-16 07:05:22.078595] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0, host: node2, port: 0
[2020-06-16 07:05:22.080769] I [glusterd-utils.c:6582:glusterd_brick_start] 0-management: starting a fresh brick process for brick /data
[2020-06-16 07:05:22.084381] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2020-06-16 07:05:22.092594] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600
[2020-06-16 07:05:22.092701] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped
[2020-06-16 07:05:22.092730] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: nfs service is stopped
[2020-06-16 07:05:22.092746] I [MSGID: 106599] [glusterd-nfs-svc.c:81:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed
[2020-06-16 07:05:22.092781] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2020-06-16 07:05:22.092926] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped
[2020-06-16 07:05:22.092942] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: quotad service is stopped
[2020-06-16 07:05:22.092976] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2020-06-16 07:05:22.093087] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2020-06-16 07:05:22.093100] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: bitd service is stopped
[2020-06-16 07:05:22.093130] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2020-06-16 07:05:22.093231] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped
[2020-06-16 07:05:22.093243] I [MSGID: 106568] [glusterd-svc-mgmt.c:265:glusterd_svc_stop] 0-management: scrub service is stopped
[2020-06-16 07:05:22.093278] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2020-06-16 07:05:22.093406] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2020-06-16 07:05:22.095392] I [rpc-clnt.c:1014:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600
[2020-06-16 07:05:22.095510] I [MSGID: 106567] [glusterd-svc-mgmt.c:230:glusterd_svc_start] 0-management: Starting glustershd service
[2020-06-16 07:05:23.099270] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 07:05:23.100420] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90, host: node1, port: 0
[2020-06-16 07:05:23.101753] I [glusterd-utils.c:6495:glusterd_brick_start] 0-management: discovered already-running brick /data
[2020-06-16 07:05:23.101772] I [MSGID: 106142] [glusterd-pmap.c:290:pmap_registry_bind] 0-pmap: adding brick /data on port 49152
[2020-06-16 07:05:23.103323] I [MSGID: 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=glustervol) to existing process with pid 1995
[2020-06-16 07:05:23.103508] I [MSGID: 106496] [glusterd-handshake.c:935:__server_getspec] 0-management: Received mount request for volume glustervol.node3.data
[2020-06-16 07:05:23.104454] I [MSGID: 106496] [glusterd-handshake.c:935:__server_getspec] 0-management: Received mount request for volume shd/glustervol
[2020-06-16 07:05:23.104551] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 07:05:23.104624] I [MSGID: 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume glustervol attached successfully to pid 1995
[2020-06-16 07:05:23.104754] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 07:05:23.106399] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 07:05:23.106766] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 07:05:23.110000] I [MSGID: 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 70200
[2020-06-16 07:05:23.111307] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 07:05:23.112818] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node1 (0), ret: 0, op_ret: 0
[2020-06-16 07:05:23.116053] I [MSGID: 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 07:05:23.117651] I [MSGID: 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to node2 (0), ret: 0, op_ret: 0
[2020-06-16 07:05:23.128295] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 07:05:23.129291] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 07:05:23.129370] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0e679115-15ad-4a8a-9d0a-91b48471ef90
[2020-06-16 07:05:23.129445] I [MSGID: 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 07:05:23.130800] I [MSGID: 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2020-06-16 07:05:23.130886] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 785a7c5b-86d3-45b9-b371-7e96e7fa88e0
[2020-06-16 07:05:23.154040] I [MSGID: 106142] [glusterd-pmap.c:290:pmap_registry_bind] 0-pmap: adding brick /data on port 49152
[2020-06-16 07:19:27.418884] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api
[2020-06-16 07:19:27.421557] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2020-06-16 07:19:27.430028] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/storage/bd.so: cannot open shared object file: No such file or directory
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/7.5/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2020-06-16 07:19:27.418884] and [2020-06-16 07:19:27.419013]
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/7.5/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2020-06-16 07:19:27.421557] and [2020-06-16 07:19:27.422049]
[2020-06-16 05:14:25.211639] I [MSGID: 100030] [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 6.9 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=node1.mgmt.windstream.net --volfile-id=/glustervol /mnt)
[2020-06-16 05:14:25.213001] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 100753
[2020-06-16 05:14:25.227855] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2020-06-16 05:14:25.227904] I [MSGID: 101190] [event-epoll.c:688:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2020-06-16 05:14:25.237171] I [MSGID: 114020] [client.c:2401:notify] 0-glustervol-client-0: parent translators are ready, attempting connect on transport
[2020-06-16 05:14:25.240502] I [MSGID: 114020] [client.c:2401:notify] 0-glustervol-client-1: parent translators are ready, attempting connect on transport
[2020-06-16 05:14:25.241717] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-0: changing port to 49152 (from 0)
[2020-06-16 05:14:25.241773] I [socket.c:811:__socket_shutdown] 0-glustervol-client-0: intentional socket shutdown(13)
[2020-06-16 05:14:25.245114] I [MSGID: 114020] [client.c:2401:notify] 0-glustervol-client-2: parent translators are ready, attempting connect on transport
[2020-06-16 05:14:25.248268] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-1: changing port to 49153 (from 0)
[2020-06-16 05:14:25.248310] I [socket.c:811:__socket_shutdown] 0-glustervol-client-1: intentional socket shutdown(15)
Final graph:
+------------------------------------------------------------------------------+
  1: volume glustervol-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host node1
  5:     option remote-subvolume /data
  6:     option transport-type socket
  7:     option transport.address-family inet
  8:     option transport.socket.ssl-enabled off
  9:     option transport.tcp-user-timeout 0
 10:     option transport.socket.keepalive-time 20
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-count 9
 13:     option send-gids true
 14: end-volume
 15:
 16: volume glustervol-client-1
 17:     type protocol/client
 18:     option ping-timeout 42
 19:     option remote-host node2
 20:     option remote-subvolume /data
 21:     option transport-type socket
 22:     option transport.address-family inet
 23:     option transport.socket.ssl-enabled off
 24:     option transport.tcp-user-timeout 0
 25:     option transport.socket.keepalive-time 20
 26:     option transport.socket.keepalive-interval 2
 27:     option transport.socket.keepalive-count 9
 28:     option send-gids true
 29: end-volume
 30:
 31: volume glustervol-client-2
 32:     type protocol/client
 33:     option ping-timeout 42
 34:     option remote-host node3
 35:     option remote-subvolume /data
 36:     option transport-type socket
 37:     option transport.address-family inet
 38:     option transport.socket.ssl-enabled off
 39:     option transport.tcp-user-timeout 0
 40:     option transport.socket.keepalive-time 20
 41:     option transport.socket.keepalive-interval 2
 42:     option transport.socket.keepalive-count 9
 43:     option send-gids true
 44: end-volume
 45:
 46: volume glustervol-replicate-0
 47:     type cluster/replicate
 48:     option afr-pending-xattr glustervol-client-0,glustervol-client-1,glustervol-client-2
 49:     option use-compound-fops off
 50:     subvolumes glustervol-client-0 glustervol-client-1 glustervol-client-2
 51: end-volume
 52:
 53: volume glustervol-dht
 54:     type cluster/distribute
 55:     option lock-migration off
 56:     option force-migration off
 57:     subvolumes glustervol-replicate-0
 58: end-volume
 59:
 60: volume glustervol-utime
 61:     type features/utime
 62:     option noatime on
 63:     subvolumes glustervol-dht
 64: end-volume
 65:
 66: volume glustervol-write-behind
 67:     type performance/write-behind
 68:     subvolumes glustervol-utime
 69: end-volume
 70:
 71: volume glustervol-read-ahead
 72:     type performance/read-ahead
 73:     subvolumes glustervol-write-behind
 74: end-volume
 75:
 76: volume glustervol-readdir-ahead
 77:     type performance/readdir-ahead
 78:     option parallel-readdir off
 79:     option rda-request-size 131072
 80:     option rda-cache-limit 10MB
 81:     subvolumes glustervol-read-ahead
 82: end-volume
 83:
 84: volume glustervol-io-cache
 85:     type performance/io-cache
 86:     subvolumes glustervol-readdir-ahead
 87: end-volume
 88:
 89: volume glustervol-open-behind
 90:     type performance/open-behind
 91:     subvolumes glustervol-io-cache
 92: end-volume
 93:
 94: volume glustervol-quick-read
 95:     type performance/quick-read
 96:     subvolumes glustervol-open-behind
 97: end-volume
 98:
 99: volume glustervol-md-cache
100:     type performance/md-cache
101:     subvolumes glustervol-quick-read
102: end-volume
103:
104: volume glustervol
105:     type debug/io-stats
106:     option log-level INFO
107:     option threads 16
108:     option latency-measurement off
109:     option count-fop-hits off
110:     option global-threading off
111:     subvolumes glustervol-md-cache
112: end-volume
113:
114: volume meta-autoload
115:     type meta
116:     subvolumes glustervol
117: end-volume
118:
+------------------------------------------------------------------------------+
[2020-06-16 05:14:25.253143] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-2: changing port to 49152 (from 0)
[2020-06-16 05:14:25.253173] I [socket.c:811:__socket_shutdown] 0-glustervol-client-2: intentional socket shutdown(14)
[2020-06-16 05:14:25.256136] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-0: Connected to glustervol-client-0, attached to remote volume '/data'.
[2020-06-16 05:14:25.256179] I [MSGID: 108005] [afr-common.c:5247:__afr_handle_child_up_event] 0-glustervol-replicate-0: Subvolume 'glustervol-client-0' came back up; going online.
[2020-06-16 05:14:25.257972] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-1: Connected to glustervol-client-1, attached to remote volume '/data'.
[2020-06-16 05:14:25.258014] I [MSGID: 108002] [afr-common.c:5609:afr_notify] 0-glustervol-replicate-0: Client-quorum is met
[2020-06-16 05:14:25.260312] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.
[2020-06-16 05:14:25.261935] I [fuse-bridge.c:5145:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23
[2020-06-16 05:14:25.261957] I [fuse-bridge.c:5756:fuse_graph_sync] 0-fuse: switched to graph 0
[2020-06-16 05:16:59.729400] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-glustervol-client-2: disconnected from glustervol-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2020-06-16 05:16:59.730053] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:08.175698 (xid=0xae)
[2020-06-16 05:16:59.730089] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]
[2020-06-16 05:16:59.730336] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:10.237849 (xid=0xaf)
[2020-06-16 05:16:59.730540] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:22.694419 (xid=0xb0)
[2020-06-16 05:16:59.731132] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:27.574139 (xid=0xb1)
[2020-06-16 05:16:59.731319] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2020-06-16 05:16:34.231433 (xid=0xb2)
[2020-06-16 05:16:59.731352] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-glustervol-client-2: socket disconnected
[2020-06-16 05:16:59.731464] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:41.213884 (xid=0xb3)
[2020-06-16 05:16:59.731650] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:48.756212 (xid=0xb4)
[2020-06-16 05:16:59.731876] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:52.258940 (xid=0xb5)
[2020-06-16 05:16:59.732060] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:54.618301 (xid=0xb6)
[2020-06-16 05:16:59.732246] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:58.288790 (xid=0xb7)
[2020-06-16 05:17:10.245302] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-2: changing port to 49152 (from 0)
[2020-06-16 05:17:10.249896] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.
The message "W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]" repeated 8 times between [2020-06-16 05:16:59.730089] and [2020-06-16 05:16:59.732278]
[2020-06-16 06:17:52.821639] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 0-glustervol-client-2: server node3:49152 has not responded in the last 42 seconds, disconnecting.
[2020-06-16 06:17:52.822009] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-glustervol-client-2: disconnected from glustervol-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2020-06-16 06:17:52.822445] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 06:17:10.168023 (xid=0xfcf)
[2020-06-16 06:17:52.822483] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]
[2020-06-16 06:17:52.822767] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2020-06-16 06:17:10.168034 (xid=0xfd0)
[2020-06-16 06:17:52.822781] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-glustervol-client-2: socket disconnected
[2020-06-16 06:17:52.822905] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 06:17:10.691138 (xid=0xfd1)
[2020-06-16 06:17:52.823085] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 06:17:18.817682 (xid=0xfd2)
[2020-06-16 06:17:52.823332] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 06:17:22.440238 (xid=0xfd3)
[2020-06-16 06:17:52.823545] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 06:17:49.602127 (xid=0xfd4)
[2020-06-16 06:17:53.517643] E [fuse-bridge.c:220:check_and_dump_fuse_W] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /usr/lib64/glusterfs/6.9/xlator/mount/fuse.so(+0x8211)[0x7f4a67cf0211] (--> /usr/lib64/glusterfs/6.9/xlator/mount/fuse.so(+0x8aea)[0x7f4a67cf0aea] (--> /lib64/libpthread.so.0(+0x7ea5)[0x7f4a69259ea5] (--> /lib64/libc.so.6(clone+0x6d)[0x7f4a68b1f8dd] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
[2020-06-16 06:17:53.517920] E [fuse-bridge.c:220:check_and_dump_fuse_W] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /usr/lib64/glusterfs/6.9/xlator/mount/fuse.so(+0x8211)[0x7f4a67cf0211] (--> /usr/lib64/glusterfs/6.9/xlator/mount/fuse.so(+0x8aea)[0x7f4a67cf0aea] (--> /lib64/libpthread.so.0(+0x7ea5)[0x7f4a69259ea5] (--> /lib64/libc.so.6(clone+0x6d)[0x7f4a68b1f8dd] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
[2020-06-16 06:18:03.831899] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-2: changing port to 49152 (from 0)
[2020-06-16 06:18:03.839444] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.
The message "W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]" repeated 4 times between [2020-06-16 06:17:52.822483] and [2020-06-16 06:17:52.823568]
[2020-06-16 07:05:19.045140] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-glustervol-client-2: disconnected from glustervol-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2020-06-16 07:05:29.301979] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-2: changing port to 49152 (from 0)
[2020-06-16 07:05:29.307263] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.
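The disconnect/reconnect cycle of glustervol-client-2 in the log above corresponds to node3 going down and coming back; the FUSE client keeps the mount online through the remaining two replicas as long as client quorum is met. A separate availability concern is the mount command itself: if the volfile server named in the mount (node1 here) is down at mount time, the mount fails. The standard GlusterFS FUSE mount option for that case is `backup-volfile-servers`. A sketch, using the node1/node2/node3 hostnames from this thread:

```shell
# Mount via node1, falling back to node2 or node3 for the volfile
# if node1 is unreachable at mount time:
mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/glustervol /mnt

# Equivalent /etc/fstab entry for mounting at boot:
# node1:/glustervol  /mnt  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3  0 0
```

Note that the volfile server only matters at mount time: once mounted, the client connects to all three bricks directly, so the loss of any single replica (as seen above when node3 rebooted) should not take the mount offline.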
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users