On 11/15/2016 03:46 AM, Shirwa Hersi wrote:
Hi,

I'm using GlusterFS geo-replication on version 3.7.11. One of the bricks becomes faulty and does not replicate to the slave bricks after I start the geo-replication session.

Following are the logs related to the faulty brick; can someone please advise me on how to resolve this issue?
[2016-06-11 09:41:17.359086] E [syncdutils(/var/glusterfs/gluster_b2/brick):276:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 166, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 663, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1497, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 571, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1201, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1107, in changelogs_batch_process
    self.process(batch)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 984, in process
    self.datas_in_batch.remove(unlinked_gfid)
KeyError: '.gfid/757b0ad8-b6f5-44da-b71a-1b1c25a72988'
The bug mentioned is fixed upstream. Refer to this link:
http://www.gluster.org/pipermail/bugs/2016-June/061785.html
You can update gluster to get the fix.
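For context, the worker dies on a plain Python KeyError: set.remove() is called for a gfid that is no longer in the batch set. Below is only a minimal sketch of that failure mode and of a tolerant removal, not the actual gsyncd patch; the values are illustrative.

    # Illustration only, assuming the batch is tracked in a Python set as in master.py.
    datas_in_batch = {'.gfid/aaaa', '.gfid/bbbb'}
    unlinked_gfid = '.gfid/757b0ad8-b6f5-44da-b71a-1b1c25a72988'

    # set.remove() raises KeyError when the element is absent -- the traceback above.
    # datas_in_batch.remove(unlinked_gfid)

    # A tolerant removal either guards the call or uses set.discard(),
    # which is a no-op for a missing element.
    if unlinked_gfid in datas_in_batch:
        datas_in_batch.remove(unlinked_gfid)
    datas_in_batch.discard(unlinked_gfid)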
Alternatively, you can try restarting the geo-rep session with "start force" to work around the error.
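For example (replace the volume and slave host names with your own):

    gluster volume geo-replication <master-volume> <slave-host>::<slave-volume> start force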
But updating is better.
Thanks,
Saravana
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users