On 07/21/2017 02:55 PM, yayo (j) wrote:
Because pending self-heals come into the picture when I/O from the clients (mounts) does not succeed on some bricks. That is mostly due to (a) the client losing its connection to some bricks (likely), or (b) the I/O failing on the bricks themselves (unlikely). If most of the I/O is also going to the 3rd brick (since you say the files are already present on all bricks and I/O is successful), then it is likely to be (a).

In the fuse mount logs for the engine volume, check whether there are any messages for brick disconnects, something along the lines of "disconnected from volname-client-x".

Just guessing here, but maybe even the 'data' volume experienced disconnects and self-heals later and you did not observe it when you ran heal info. Check the glustershd log or the mount log for self-heal completion messages on 0-data-replicate-0 as well.

Regards,
Ravi
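[Editor's sketch, not part of the original mail] A minimal Python example of scanning a log for the two kinds of messages mentioned above. The log path and the exact wording of the heal-completion pattern are assumptions; adjust them to the fuse mount log or glustershd.log on your nodes (typically under /var/log/glusterfs/).

    # Sketch: scan a GlusterFS log for brick disconnects and self-heal completions.
    import re
    import sys

    # Assumed path -- replace with your mount log or glustershd.log.
    LOG_PATH = "/var/log/glusterfs/glustershd.log"

    # Patterns based on the messages mentioned in the mail; wording may vary by version.
    disconnect_re = re.compile(r"disconnected from \S*client-\d+")
    heal_done_re = re.compile(r"Completed (data|metadata|entry) selfheal")

    def scan(path: str) -> None:
        """Print lines that indicate brick disconnects or completed self-heals."""
        try:
            with open(path, errors="replace") as log:
                for line in log:
                    if disconnect_re.search(line) or heal_done_re.search(line):
                        print(line.rstrip())
        except OSError as exc:
            sys.exit(f"could not read {path}: {exc}")

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else LOG_PATH)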