I've encountered the same issue; in my case, however, it seems to
have been caused by a kernel bug present in versions
4.4.0-58 through 4.4.0-63
(https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1655842).
Since you are running 4.4.0-62, I would suggest upgrading and
seeing if the error persists.
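A quick way to check whether a node falls in the affected range is to compare the ABI number of the running kernel (the part after the dash in `uname -r`). This is just a sketch using POSIX shell parameter expansion; the `ver` value is an example standing in for `$(uname -r)`:

```shell
#!/bin/sh
# Example kernel version; on a live node use: ver=$(uname -r | cut -d- -f1,2)
ver="4.4.0-62"

# Extract the ABI number after the last dash, e.g. 62
abi=${ver##*-}

# The Launchpad bug #1655842 range is 4.4.0-58 through 4.4.0-63
if [ "${ver%-*}" = "4.4.0" ] && [ "$abi" -ge 58 ] && [ "$abi" -le 63 ]; then
  echo "affected: consider upgrading the kernel"
else
  echo "not affected"
fi
```

On an affected Ubuntu node, upgrading via the usual `apt-get update && apt-get dist-upgrade` followed by a reboot should pull in a fixed kernel.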
Edvin Ekström,
On 2017-04-26 09:09, Amudhan P wrote:
I did a volume start force, and now the self-heal daemon
is up on the node that was down.
But bitrot has now triggered the crawling process on all nodes.
Why is it crawling the disks again if the process is
already running?
[output from bitd.log]
[2017-04-13 06:01:23.930089] I
[glusterfsd-mgmt.c:1778:mgmt_getspec_cbk] 0-glusterfs: No
change in volfile, continuing
[2017-04-26 06:51:46.998935] I [MSGID: 100030]
[glusterfsd.c:2460:main] 0-/usr/local/sbin/glusterfs:
Started running /usr/local/sbin/glusterfs version 3.10.1
(args: /usr/local/sbin/glusterfs -s localhost --volfile-id
gluster/bitd -p /var/lib/glusterd/bitd/run/bitd.pid -l
/var/log/glusterfs/bitd.log -S
/var/run/gluster/02f1dd346d47b9006f9bf64e347338fd.socket
--global-timer-wheel)
[2017-04-26 06:51:47.002732] I [MSGID: 101190]
[event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users