Hi Raghavendra and Ben,
thanks for your answers. The volume is a backend for Nova instances of an OpenStack infrastructure. As Raghavendra wrote, it not only seems so, I am sure the compute node kept writing to the gluster volume after a potential network problem; however, our monitoring system did not show a network problem, and if there was one it may have lasted only a moment. So the timeline could be:

- nova writes/reads to/from the volume volume-nova-pp
- network problem lasting about one second
- gluster logs the network problem (first part of the log):

[2014-10-10 07:29:43.730792] W [socket.c:522:__socket_rwv] 0-glusterfs: readv on 192.168.61.100:24007 failed (No data available)
[2014-10-10 07:29:54.022608] E [socket.c:2161:socket_connect_finish] 0-glusterfs: connection to 192.168.61.100:24007 failed (Connection refused)
[2014-10-10 07:30:05.271825] W [client-rpc-fops.c:866:client3_3_writev_cbk] 0-volume-nova-pp-client-0: remote operation failed: Input/output error
[2014-10-10 07:30:08.783145] W [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse: 3661260: WRITE => -1 (Input/output error)

- nova keeps writing/reading to/from volume-nova-pp
- second part of the log, millions of lines like this:

[2014-10-15 14:41:15.895105] W [fuse-bridge.c:2201:fuse_writev_cbk] 0-glusterfs-fuse: 951700230: WRITE => -1 (Transport endpoint is not connected)

For Ben: I'm using gluster 3.5.2, not gluster 3.6. Should I try gluster 3.6?

It would be a very good thing if gluster had an option to rate-limit a particular logging call, either per unit of time or when the log size exceeds a predefined limit. In this particular case, I think the WARNING should be written once per minute after the first 1000 similar lines. A rough sketch of what I mean is below.
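To make the idea concrete, here is a minimal sketch in C of such a suppression wrapper. It is not gluster code: the names (rl_log, rl_state, RL_THRESHOLD, RL_INTERVAL) are made up for illustration, and a real implementation would have to hook into gf_log and handle locking for multi-threaded callers.

#include <stdio.h>
#include <time.h>

/* Hypothetical rate-limited logger: emit the first RL_THRESHOLD
 * occurrences of a message, then at most one per RL_INTERVAL seconds,
 * reporting how many similar messages were suppressed in between. */
#define RL_THRESHOLD 1000
#define RL_INTERVAL  60

struct rl_state {
    unsigned long count;       /* total occurrences seen so far */
    unsigned long suppressed;  /* occurrences dropped since last emit */
    time_t        last_print;  /* when this message was last emitted */
};

static void rl_log(struct rl_state *st, const char *msg)
{
    time_t now = time(NULL);

    st->count++;
    if (st->count <= RL_THRESHOLD) {
        /* below the threshold: log every occurrence */
        fprintf(stderr, "W %s\n", msg);
        st->last_print = now;
        return;
    }
    if (now - st->last_print >= RL_INTERVAL) {
        /* interval elapsed: emit once, with the suppressed count */
        fprintf(stderr, "W %s (%lu similar messages suppressed)\n",
                msg, st->suppressed);
        st->suppressed = 0;
        st->last_print = now;
    } else {
        st->suppressed++;
    }
}

Each logging call site would keep its own static struct rl_state next to the call, so one noisy message (like the fuse_writev_cbk WRITE failure above) cannot flood the log while other messages are unaffected.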
Cheers,
Sergio

On 10/27/2014 05:32 PM, Raghavendra G wrote: