Hello Andrei,
I did some research but didn't find an existing bug report for this issue. If you want, you can create such a report here:
That is the recommended way to report a bug in GlusterFS; the GitHub issues are meant for new features.
Regards
David
2018-07-13 11:44 GMT+02:00 Havriliuc Andrei <andrei@xxxxxxxxxxxxx>:
Hello David,
Yeah, I also disabled SSL/TLS for Gluster management and for the I/O path. The problem is that I will have traffic over a public WAN for a geo-replication instance, so I cannot move forward until this is solved.
Was there a bug filed about this? I've looked at the issues on the GlusterFS GitHub page but I didn't find anything related to this:
https://github.com/gluster/glusterfs/issues
Regards,
Andrei
On 7/12/18 4:03 PM, David Spisla wrote:
Hello Andrei,
I am also using Gluster 4.1 on CentOS and I have the same problem. I tested it with one volume without network encryption and one with network encryption. You are not the only one; it seems to be a bug. At the moment there is no choice other than to disable client.ssl and server.ssl on the volume to get stable I/O.
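For reference, a minimal sketch of that workaround (assuming the volume name vol01 and mount point /mnt from your setup below; the volume probably has to be stopped before changing the options, and the client remounted afterwards):
gluster volume stop vol01
gluster volume set vol01 client.ssl off
gluster volume set vol01 server.ssl off
gluster volume start vol01
# on the client, remount the volume
umount /mnt && mount -t glusterfs gluster1:/vol01 /mnt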
Regards
David
2018-07-12 10:45 GMT+02:00 Havriliuc Andrei <andrei@xxxxxxxxxxxxx>:
Hello,
I am doing some tests with GlusterFS 4.0 and I can't seem to solve some SSL/TLS issues. I am trying to set up a 2-node replicated Gluster volume with SSL/TLS. For this setup, I use 3 KVM VMs (2 storage nodes + 1 client node). For the networking part, I use a dedicated private LAN for the KVM VMs. Each VM can ping the others, so there is no problem with connectivity.
To make the procedure I used as clear as possible, I will list all commands in chronological order:
=====================================================
1. First, I update the systems, install ntp and then reboot:
yum update
yum install ntp
systemctl status ntpd
systemctl start ntpd
systemctl enable ntpd
systemctl status ntpd
=====================================================
2. I use a separate disk in each of the two VM storage nodes; each node sees it as /dev/sdb. After creating a thinly provisioned LV on each of the two nodes, I create an XFS filesystem on it:
pvcreate /dev/sdb1
vgcreate -s 32M vg_glusterfs /dev/sdb1
lvcreate -L 20G --thinpool glusterfs_thin_pool vg_glusterfs
lvcreate -V 15G --thin -n glusterfs_thin_vol1 vg_glusterfs/glusterfs_thin_pool
mkfs.xfs -i size=512 /dev/vg_glusterfs/glusterfs_thin_vol1
=====================================================
3. On each node, I create the brick mount point and add the following entry to /etc/fstab:
mkdir /data
echo "/dev/vg_glusterfs/glusterfs_thin_vol1 /data xfs defaults 1 2" >> /etc/fstab
mount -a
=====================================================
4. After mounting the filesystem, I see the following in df -Th, which is correct:
[root@gluster1 brick1]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 46G 1.4G 42G 4% /
devtmpfs devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs tmpfs 3.9G 8.6M 3.9G 1% /run
tmpfs tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs tmpfs 783M 0 783M 0% /run/user/0
/dev/mapper/vg_glusterfs-glusterfs_thin_vol1 xfs 15G 34M 15G 1% /data
=====================================================
5. Create specific volume dirs on all storage nodes:
mkdir -pv /data/glusterfs/${HOSTNAME%%.*}/vol01
=====================================================
6. Add entries in /etc/hosts:
vim /etc/hosts
192.168.10.233 gluster1
192.168.10.234 gluster2
192.168.10.237 gluster-client
=====================================================
7. Install gluster from the CentOS SIG:
yum search centos-release-gluster
yum install centos-release-gluster40
yum install glusterfs-server
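To confirm the exact build that ends up installed (the client log further down reports 4.0.2), something like this should do:
gluster --version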
=====================================================
8. Set up TLS/SSL encryption on the storage nodes and the client (gluster1, gluster2, gluster-client). On all three machines, generate a private key:
openssl genrsa -out /etc/ssl/glusterfs.key 2048
In gluster1 node:
openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=gluster1" -out /etc/ssl/glusterfs.pem
In gluster2 node:
openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=gluster2" -out /etc/ssl/glusterfs.pem
In gluster-client node:
openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=gluster-client" -out /etc/ssl/glusterfs.pem
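To double-check that each certificate carries the expected CN (this is what auth.ssl-allow is matched against later), something along these lines can be run on each machine:
openssl x509 -in /etc/ssl/glusterfs.pem -noout -subject -dates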
=====================================================
9. On another box, I concatenate the .pem certificates into .ca files.
Bring all .pem files locally:
scp gluster1:/etc/ssl/glusterfs.pem gluster01.pem
scp gluster2:/etc/ssl/glusterfs.pem gluster02.pem
scp gluster-client:/etc/ssl/glusterfs.pem gluster-client.pem
For storage nodes, I concatenate all .pem certificates (including the client's .pem):
cat gluster01.pem gluster02.pem gluster-client.pem > glusterfs-nodes.ca
For the client:
cat gluster01.pem gluster02.pem > glusterfs-client.ca
=====================================================
10. Put the glusterfs-nodes.ca file on all the storage nodes (it contains the storage nodes' .pem certificates plus the client's .pem):
scp glusterfs-nodes.ca gluster1:/etc/ssl/glusterfs.ca
scp glusterfs-nodes.ca gluster2:/etc/ssl/glusterfs.ca
Put the glusterfs-client.ca file on the client (it contains only the storage nodes' .pem certificates):
scp glusterfs-client.ca gluster-client:/etc/ssl/glusterfs.ca
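As a quick sanity check that the right bundle ended up in the right place, the checksums can be compared (the two storage nodes should match each other, while the client's will differ):
md5sum /etc/ssl/glusterfs.ca    # run on gluster1, gluster2 and gluster-client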
=====================================================
11. Enable management encryption on each node (command run on gluster1, gluster2, gluster-client):
touch /var/lib/glusterd/secure-access
=====================================================
12. Start, enable and check status of glusterd on gluster1 and gluster2:
systemctl start glusterd
systemctl enable glusterd
systemctl status glusterd
=====================================================
13. Configure trusted storage pool (TSP):
From gluster1:
gluster peer status
gluster peer probe gluster2
=====================================================
14. Create the replicated gluster volume but don't start it:
gluster volume create vol01 replica 2 transport tcp gluster1:/data/glusterfs/gluster1/vol01/brick1 gluster2:/data/glusterfs/gluster2/vol01/brick1
=====================================================
15. Set up SSL/TLS access to the volume:
gluster volume set vol01 auth.ssl-allow 'gluster01,gluster02,gluster-client'
gluster volume set vol01 client.ssl on
gluster volume set vol01 server.ssl on
gluster volume set vol01 network.ping-timeout "5"
gluster volume start vol01
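After starting the volume, the SSL-related options and the brick status can be double-checked with something like:
gluster volume info vol01
gluster volume status vol01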
=====================================================
16. Mount the volume on gluster-client:
mount -t glusterfs gluster1:/vol01 /mnt
After mounting the volume, df -h has the correct output:
[root@gluster-client mnt]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 48G 36G 9.3G 80% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.5M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 379M 0 379M 0% /run/user/0
gluster1:/vol01 15G 187M 15G 2% /mnt
=====================================================
17. If I try to copy an archive, cp fails after transferring part of the file, and the /mnt folder becomes inaccessible for a while:
[root@gluster-client mnt]# cp /root/mybee.tar.gz /mnt/
cp: error writing ‘/mnt/mybee.tar.gz’: Transport endpoint is not connected
cp: failed to extend ‘/mnt/mybee.tar.gz’: Transport endpoint is not connected
cp: failed to close ‘/mnt/mybee.tar.gz’: Transport endpoint is not connected
[root@gluster-client mnt]# ll /mnt
ls: cannot open directory .: Transport endpoint is not connected
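The server side of the disconnect should also be visible in the brick logs on the storage nodes; assuming the usual convention where the brick path is flattened into the log file name, that would be something like:
tail -n 50 /var/log/glusterfs/bricks/data-glusterfs-gluster1-vol01-brick1.log
The "failed to open /etc/ssl/dhparam.pem" messages in the client log below probably just mean DH ciphers are unavailable and are likely unrelated to the disconnects, but if needed that file could be generated with:
openssl dhparam -out /etc/ssl/dhparam.pem 2048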
=====================================================
This is the log from gluster-client for the mnt mountpoint (mnt.log):
[2018-06-29 07:52:03.869382] I [MSGID: 100030] [glusterfsd.c:2625:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.0.2 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=gluster1 --volfile-id=/vol01 /mnt)
[2018-06-29 07:52:03.878709] I [socket.c:4470:socket_init] 0-glusterfs: SSL support on the I/O path is ENABLED
[2018-06-29 07:52:03.878782] I [socket.c:4473:socket_init] 0-glusterfs: SSL support for glusterd is ENABLED
[2018-06-29 07:52:03.878795] I [socket.c:4490:socket_init] 0-glusterfs: using private polling thread
[2018-06-29 07:52:03.879108] E [socket.c:4541:socket_init] 0-glusterfs: failed to open /etc/ssl/dhparam.pem, DH ciphers are disabled
[2018-06-29 07:52:03.885718] I [MSGID: 101190] [event-epoll.c:609:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-06-29 07:52:03.901983] I [MSGID: 101190] [event-epoll.c:609:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2018-06-29 07:52:03.902092] I [socket.c:4470:socket_init] 0-vol01-client-1: SSL support on the I/O path is ENABLED
[2018-06-29 07:52:03.902117] I [socket.c:4473:socket_init] 0-vol01-client-1: SSL support for glusterd is ENABLED
[2018-06-29 07:52:03.902142] I [socket.c:4490:socket_init] 0-vol01-client-1: using private polling thread
[2018-06-29 07:52:03.902266] E [socket.c:4541:socket_init] 0-vol01-client-1: failed to open /etc/ssl/dhparam.pem, DH ciphers are disabled
[2018-06-29 07:52:03.902977] I [socket.c:4470:socket_init] 0-vol01-client-0: SSL support on the I/O path is ENABLED
[2018-06-29 07:52:03.903018] I [socket.c:4473:socket_init] 0-vol01-client-0: SSL support for glusterd is ENABLED
[2018-06-29 07:52:03.903028] I [socket.c:4490:socket_init] 0-vol01-client-0: using private polling thread
[2018-06-29 07:52:03.903161] E [socket.c:4541:socket_init] 0-vol01-client-0: failed to open /etc/ssl/dhparam.pem, DH ciphers are disabled
[2018-06-29 07:52:03.903756] I [MSGID: 114020] [client.c:2300:notify] 0-vol01-client-0: parent translators are ready, attempting connect on transport
[2018-06-29 07:52:03.908432] I [MSGID: 114020] [client.c:2300:notify] 0-vol01-client-1: parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
1: volume vol01-client-0
2: type protocol/client
3: option ping-timeout 5
4: option remote-host gluster1
5: option remote-subvolume /data/glusterfs/gluster1/vol01/brick1
6: option transport-type socket
7: option transport.address-family inet
8: option transport.socket.ssl-enabled on
9: option transport.tcp-user-timeout 0
10: option transport.socket.keepalive-time 20
11: option transport.socket.keepalive-interval 2
12: option transport.socket.keepalive-count 9
13: option send-gids true
14: end-volume
15:
16: volume vol01-client-1
17: type protocol/client
18: option ping-timeout 5
19: option remote-host gluster2
20: option remote-subvolume /data/glusterfs/gluster2/vol01/brick1
21: option transport-type socket
22: option transport.address-family inet
23: option transport.socket.ssl-enabled on
24: option transport.tcp-user-timeout 0
25: option transport.socket.keepalive-time 20
26: option transport.socket.keepalive-interval 2
27: option transport.socket.keepalive-count 9
28: option send-gids true
29: end-volume
30:
31: volume vol01-replicate-0
32: type cluster/replicate
33: option afr-pending-xattr vol01-client-0,vol01-client-1
34: option use-compound-fops off
35: subvolumes vol01-client-0 vol01-client-1
36: end-volume
37:
38: volume vol01-dht
39: type cluster/distribute
40: option lock-migration off
41: option force-migration off
42: subvolumes vol01-replicate-0
43: end-volume
44:
45: volume vol01-write-behind
46: type performance/write-behind
47: subvolumes vol01-dht
48: end-volume
49:
50: volume vol01-read-ahead
51: type performance/read-ahead
52: subvolumes vol01-write-behind
53: end-volume
54:
55: volume vol01-readdir-ahead
56: type performance/readdir-ahead
57: option parallel-readdir off
58: option rda-request-size 131072
59: option rda-cache-limit 10MB
60: subvolumes vol01-read-ahead
61: end-volume
62:
63: volume vol01-io-cache
64: type performance/io-cache
65: subvolumes vol01-readdir-ahead
66: end-volume
67:
68: volume vol01-quick-read
69: type performance/quick-read
70: subvolumes vol01-io-cache
71: end-volume
72:
73: volume vol01-open-behind
74: type performance/open-behind
75: subvolumes vol01-quick-read
76: end-volume
77:
78: volume vol01-md-cache
79: type performance/md-cache
80: subvolumes vol01-open-behind
81: end-volume
82:
83: volume vol01
84: type debug/io-stats
85: option log-level INFO
86: option latency-measurement off
87: option count-fop-hits off
88: subvolumes vol01-md-cache
89: end-volume
90:
91: volume meta-autoload
92: type meta
93: subvolumes vol01
94: end-volume
95:
+------------------------------------------------------------------------------+
[2018-06-29 07:52:03.917518] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:03.918459] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:03.919289] I [rpc-clnt.c:2071:rpc_clnt_reconfig] 0-vol01-client-0: changing port to 49152 (from 0)
[2018-06-29 07:52:03.922903] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:03.923922] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:03.925115] I [rpc-clnt.c:2071:rpc_clnt_reconfig] 0-vol01-client-1: changing port to 49152 (from 0)
[2018-06-29 07:52:03.930023] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:03.930756] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:03.931859] I [MSGID: 114046] [client-handshake.c:1176:client_setvolume_cbk] 0-vol01-client-0: Connected to vol01-client-0, attached to remote volume '/data/glusterfs/gluster1/vol01/brick1'.
[2018-06-29 07:52:03.931894] I [MSGID: 108005] [afr-common.c:5081:__afr_handle_child_up_event] 0-vol01-replicate-0: Subvolume 'vol01-client-0' came back up; going online.
[2018-06-29 07:52:03.938903] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:03.939803] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-1: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:03.941300] I [MSGID: 114046] [client-handshake.c:1176:client_setvolume_cbk] 0-vol01-client-1: Connected to vol01-client-1, attached to remote volume '/data/glusterfs/gluster2/vol01/brick1'.
[2018-06-29 07:52:03.942884] I [fuse-bridge.c:4234:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
[2018-06-29 07:52:03.942907] I [fuse-bridge.c:4864:fuse_graph_sync] 0-fuse: switched to graph 0
[2018-06-29 07:52:03.946767] W [socket.c:592:__socket_rwv] 0-vol01-client-0: readv on 192.168.10.233:49152 failed (Input/output error)
[2018-06-29 07:52:03.946791] E [socket.c:2785:socket_poller] 0-vol01-client-0: socket_poller 192.168.10.233:49152 failed (Input/output error)
[2018-06-29 07:52:03.946837] I [MSGID: 114018] [client.c:2227:client_rpc_notify] 0-vol01-client-0: disconnected from vol01-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2018-06-29 07:52:03.947264] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2018-06-29 07:52:03.945958 (xid=0xb)
[2018-06-29 07:52:03.947297] W [MSGID: 114031] [client-rpc-fops_v2.c:2540:client4_0_lookup_cbk] 0-vol01-client-0: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]
[2018-06-29 07:52:03.947455] I [MSGID: 109005] [dht-selfheal.c:2328:dht_selfheal_directory] 0-vol01-dht: Directory selfheal failed: Unable to form layout for directory /
[2018-06-29 07:52:07.885389] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:07.886457] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:07.887452] I [rpc-clnt.c:2071:rpc_clnt_reconfig] 0-vol01-client-0: changing port to 49152 (from 0)
[2018-06-29 07:52:07.900627] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:07.901594] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-vol01-client-0: error returned while attempting to connect to host:(null), port:0
[2018-06-29 07:52:07.903220] I [MSGID: 114046] [client-handshake.c:1176:client_setvolume_cbk] 0-vol01-client-0: Connected to vol01-client-0, attached to remote volume '/data/glusterfs/gluster1/vol01/brick1'.
[2018-06-29 07:52:13.062404] I [MSGID: 114018] [client.c:2227:client_rpc_notify] 0-vol01-client-0: disconnected from vol01-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2018-06-29 07:52:13.062780] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(WRITE(13)) called at 2018-06-29 07:52:13.048761 (xid=0x22)
[2018-06-29 07:52:13.062826] W [MSGID: 114031] [client-rpc-fops_v2.c:658:client4_0_writev_cbk] 0-vol01-client-0: remote operation failed [Transport endpoint is not connected]
[2018-06-29 07:52:13.063017] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(WRITE(13)) called at 2018-06-29 07:52:13.049162 (xid=0x23)
[2018-06-29 07:52:13.063175] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(WRITE(13)) called at 2018-06-29 07:52:13.049865 (xid=0x24)
[2018-06-29 07:52:13.063319] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(WRITE(13)) called at 2018-06-29 07:52:13.050199 (xid=0x25)
[2018-06-29 07:52:13.063473] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(WRITE(13)) called at 2018-06-29 07:52:13.050489 (xid=0x26)
[2018-06-29 07:52:13.063600] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(WRITE(13)) called at 2018-06-29 07:52:13.050757 (xid=0x27)
[2018-06-29 07:52:13.063745] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(WRITE(13)) called at 2018-06-29 07:52:13.051031 (xid=0x28)
[2018-06-29 07:52:13.063883] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(WRITE(13)) called at 2018-06-29 07:52:13.053035 (xid=0x29)
The message "W [MSGID: 114031] [client-rpc-fops_v2.c:658:client4_0_writev_cbk] 0-vol01-client-0: remote operation failed [Transport endpoint is not connected]" repeated 7 times between [2018-06-29 07:52:13.062826] and [2018-06-29 07:52:13.063903]
[2018-06-29 07:52:13.104580] W [MSGID: 114031] [client-rpc-fops_v2.c:1570:client4_0_fxattrop_cbk] 0-vol01-client-0: remote operation failed
[2018-06-29 07:52:13.109230] W [socket.c:592:__socket_rwv] 0-vol01-client-1: writev on 192.168.10.234:49152 failed (No data available)
[2018-06-29 07:52:13.109282] E [socket.c:2777:socket_poller] 0-vol01-client-1: poll error on socket
[2018-06-29 07:52:13.109340] I [MSGID: 114018] [client.c:2227:client_rpc_notify] 0-vol01-client-1: disconnected from vol01-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2018-06-29 07:52:13.109357] E [MSGID: 108006] [afr-common.c:5158:__afr_handle_child_down_event] 0-vol01-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-06-29 07:52:13.109461] I [MSGID: 108006] [afr-common.c:5444:afr_local_init] 0-vol01-replicate-0: no subvolumes up
[2018-06-29 07:52:13.109679] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-1: forced unwinding frame type(GlusterFS 4.x v1) op(FSTAT(25)) called at 2018-06-29 07:52:13.104513 (xid=0x42)
[2018-06-29 07:52:13.109700] W [MSGID: 114031] [client-rpc-fops_v2.c:1260:client4_0_fstat_cbk] 0-vol01-client-1: remote operation failed [Transport endpoint is not connected]
[2018-06-29 07:52:13.109854] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-1: forced unwinding frame type(GlusterFS 4.x v1) op(FXATTROP(34)) called at 2018-06-29 07:52:13.104729 (xid=0x43)
[2018-06-29 07:52:13.109877] W [MSGID: 114031] [client-rpc-fops_v2.c:1570:client4_0_fxattrop_cbk] 0-vol01-client-1: remote operation failed
[2018-06-29 07:52:13.109897] E [MSGID: 114031] [client-rpc-fops_v2.c:1352:client4_0_finodelk_cbk] 0-vol01-client-0: remote operation failed [Transport endpoint is not connected]
[2018-06-29 07:52:13.110036] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-1: forced unwinding frame type(GlusterFS 4.x v1) op(WRITE(13)) called at 2018-06-29 07:52:13.105623 (xid=0x44)
[2018-06-29 07:52:13.110053] W [MSGID: 114031] [client-rpc-fops_v2.c:658:client4_0_writev_cbk] 0-vol01-client-1: remote operation failed [Transport endpoint is not connected]
[2018-06-29 07:52:13.110232] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-1: forced unwinding frame type(GlusterFS 4.x v1) op(FSTAT(25)) called at 2018-06-29 07:52:13.105658 (xid=0x45)
[2018-06-29 07:52:13.110250] W [MSGID: 114031] [client-rpc-fops_v2.c:1260:client4_0_fstat_cbk] 0-vol01-client-1: remote operation failed [Transport endpoint is not connected]
[2018-06-29 07:52:13.110386] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-1: forced unwinding frame type(GlusterFS 4.x v1) op(FSTAT(25)) called at 2018-06-29 07:52:13.106862 (xid=0x46)
[2018-06-29 07:52:13.110530] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-1: forced unwinding frame type(GlusterFS 4.x v1) op(FSTAT(25)) called at 2018-06-29 07:52:13.108103 (xid=0x47)
[2018-06-29 07:52:13.110660] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-1: forced unwinding frame type(GlusterFS 4.x v1) op(FSTAT(25)) called at 2018-06-29 07:52:13.108969 (xid=0x48)
[2018-06-29 07:52:13.110793] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-1: forced unwinding frame type(GlusterFS 4.x v1) op(FXATTROP(34)) called at 2018-06-29 07:52:13.109256 (xid=0x49)
The message "W [MSGID: 114031] [client-rpc-fops_v2.c:1260:client4_0_fstat_cbk] 0-vol01-client-1: remote operation failed [Transport endpoint is not connected]" repeated 3 times between [2018-06-29 07:52:13.110250] and [2018-06-29 07:52:13.110675]
[2018-06-29 07:52:13.110808] W [MSGID: 114031] [client-rpc-fops_v2.c:1570:client4_0_fxattrop_cbk] 0-vol01-client-1: remote operation failed
[2018-06-29 07:52:13.110933] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7c73b8ae5b] (--> /lib64/libgfrpc.so.0(+0xce4e)[0x7f7c73955e4e] (--> /lib64/libgfrpc.so.0(+0xcf6e)[0x7f7c73955f6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8d)[0x7f7c7395760d] (--> /lib64/libgfrpc.so.0(+0xf178)[0x7f7c73958178] ))))) 0-vol01-client-1: forced unwinding frame type(GlusterFS 4.x v1) op(FXATTROP(34)) called at 2018-06-29 07:52:13.109284 (xid=0x4a)
The message "W [MSGID: 114031] [client-rpc-fops_v2.c:1570:client4_0_fxattrop_cbk] 0-vol01-client-1: remote operation failed" repeated 3 times between [2018-06-29 07:52:13.110808] and [2018-06-29 07:52:14.878874]
[2018-06-29 07:52:14.878895] E [MSGID: 114031] [client-rpc-fops_v2.c:1352:client4_0_finodelk_cbk] 0-vol01-client-1: remote operation failed [Transport endpoint is not connected]
[2018-06-29 07:52:14.879095] I [MSGID: 108006] [afr-common.c:5444:afr_local_init] 0-vol01-replicate-0: no subvolumes up
[2018-06-29 07:52:14.879217] W [fuse-bridge.c:2427:fuse_writev_cbk] 0-glusterfs-fuse: 76: WRITE => -1 gfid=a8848be6-1c05-4a72-88a7-5209eeef7c2c fd=0x7f7c50023a68 (Transport endpoint is not connected)
[2018-06-29 07:52:14.880509] W [fuse-bridge.c:1406:fuse_err_cbk] 0-glusterfs-fuse: 77: FLUSH() ERR => -1 (Transport endpoint is not connected)
The message "I [MSGID: 108006] [afr-common.c:5444:afr_local_init] 0-vol01-replicate-0: no subvolumes up" repeated 11 times between [2018-06-29 07:52:14.879095] and [2018-06-29 07:52:17.908056]
[2018-06-29 07:52:17.908066] E [MSGID: 101046] [dht-common.c:1501:dht_lookup_dir_cbk] 0-vol01-dht: dict is null
[2018-06-29 07:52:17.908175] W [fuse-bridge.c:896:fuse_attr_cbk] 0-glusterfs-fuse: 79: LOOKUP() / => -1 (Transport endpoint is not connected)
[2018-06-29 07:52:18.758975] I [MSGID: 108006] [afr-common.c:5444:afr_local_init] 0-vol01-replicate-0: no subvolumes up
[2018-06-29 07:52:18.759155] I [MSGID: 108006] [afr-common.c:5444:afr_local_init] 0-vol01-replicate-0: no subvolumes up
[2018-06-29 07:52:18.759165] E [MSGID: 101046] [dht-common.c:1501:dht_lookup_dir_cbk] 0-vol01-dht: dict is null
[2018-06-29 07:52:18.759214] W [fuse-bridge.c:896:fuse_attr_cbk] 0-glusterfs-fuse: 80: LOOKUP() / => -1 (Transport endpoint is not connected)
[2018-06-29 07:52:21.662626] I [MSGID: 108006] [afr-common.c:5444:afr_local_init] 0-vol01-replicate-0: no subvolumes up
[2018-06-29 07:52:21.662756] I [MSGID: 108006] [afr-common.c:5444:afr_local_init] 0-vol01-replicate-0: no subvolumes up