Hello everyone,
I was searching for a replacement for my rsync-based duplication of data to a second server and ended up with Gluster geo-replication. But after reading the documentation and lots of websites, I'm still unsure how to set up geo-replication without retransferring all the data.
I succeeded in converting my existing data folder into a Gluster volume by creating a volume on top of it and then running "find /media/brick -noleaf -print0 | xargs --null stat" inside the mounted Gluster volume on the master.
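For reference, this is roughly what I did on the master (host, volume and mount names below are only placeholders, my real names differ):

  # create a single-brick volume directly on top of the existing data directory
  # ("force" may or may not be needed depending on where the brick lives)
  gluster volume create myvol master-host:/media/brick force
  gluster volume start myvol
  # mount the volume and stat every file once so gluster creates the gfids
  mount -t glusterfs master-host:/myvol /mnt/myvol
  cd /mnt/myvol
  find . -noleaf -print0 | xargs --null stat > /dev/null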
But how do I have to prepare the slave? I tried to do it the same way as on the master (I have sketched the session setup after the log below), but this only results in error messages like:
[2018-01-22 11:17:05.209027] E [repce(/media/brick):209:__call__] RepceClient: call failed on peer call=27401:140641732232960:1516619825.12 method=entry_ops error=OSError
[2018-01-22 11:17:05.209497] E [syncdutils(/media/brick):331:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 210, in main
    main_i()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 801, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1670, in service_loop
    g1.crawlwrap(oneshot=True, register_time=register_time)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 597, in crawlwrap
    self.crawl()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1555, in crawl
    self.process([item[1]], 0)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1204, in process
    self.process_change(change, done, retry)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1114, in process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 228, in __call__
    return self.ins(self.meth, *a)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 210, in __call__
    raise res
OSError: [Errno 42] .gfid/00000000-0000-0000-0000-000000000001
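For completeness, the slave volume was created the same way on top of the existing rsync copy of the data, and the geo-replication session itself was set up with the usual commands, roughly like this (volume and host names again placeholders):

  # on the master: generate and distribute the ssh keys for geo-replication
  gluster system:: execute gsec_create
  # create and start the session towards the slave volume
  gluster volume geo-replication myvol slave-host::slavevol create push-pem
  gluster volume geo-replication myvol slave-host::slavevol start
  gluster volume geo-replication myvol slave-host::slavevol status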
I have now removed all extended attributes on the slave and deleted the .glusterfs folder in the brick, so the system is hopefully back in its initial state.
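In case it matters, the cleanup on the slave brick was roughly this:

  # strip the gluster-specific extended attributes from the brick root and its contents
  setfattr -x trusted.glusterfs.volume-id /media/brick
  setfattr -x trusted.gfid /media/brick
  find /media/brick -exec setfattr -x trusted.gfid {} \; 2>/dev/null
  # remove gluster's internal bookkeeping directory
  rm -rf /media/brick/.glusterfs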
Is there any way to set up a geo-replication session without Gluster resyncing all the data? A full resync would take months with my poor connection over here. I'm using Gluster 3.13.1 on two Ubuntu 16.04.3 LTS hosts.
I hope someone can help me with some hints. Thanks and best regards,
Tino