Re: geo-replication initial setup with existing data

Hi,

Geo-replication expects the gfid (a unique identifier, similar to an inode number in backend file systems) of a file to be the same
on both the master and the slave gluster volume. If the data was copied by any means other than geo-replication,
the gfids will differ. The crashes you are seeing are because of that.
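
For example, you can check whether the gfids match by reading the trusted.gfid extended attribute of a file directly on the brick backend of both master and slave (run as root on the brick itself, not on the mount; the brick path below is just a placeholder):

    # getfattr -n trusted.gfid -e hex /path/to/brick/some/file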

If the data is not huge, I would recommend syncing the data from the master volume using geo-replication. The other way
is to copy the data directly to the slave, set the gfid of each file at the backend to be the same as on the master volume, and then set up
geo-replication. To do the latter, follow the steps below. (Note that this is not tested extensively.)

  1. Run the following commands on any one of the master nodes:
    # cd /usr/share/glusterfs/scripts/ 
    # sh generate-gfid-file.sh localhost:${master-vol} $PWD/get-gfid.sh /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt 
    # scp /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt root@${slavehost}:/tmp/
  2. Run the following commands on a slave node:
    # cd /usr/share/glusterfs/scripts/ 
    # sh slave-upgrade.sh localhost:${slave-vol} /tmp/upgrade-gfid-values.txt $PWD/gsync-sync-gfid
  3. Set up the geo-replication session and start it (a sketch follows below).
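
For step 3, a root geo-replication session is usually created and started roughly as follows. This is only a sketch; adjust the volume names and slave host to your setup, and make sure passwordless ssh from the master node to the slave node is in place first:

    # gluster volume geo-replication ${master-vol} ${slavehost}::${slave-vol} create push-pem
    # gluster volume geo-replication ${master-vol} ${slavehost}::${slave-vol} start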

Thanks,
Kotresh HR

On Mon, Jan 22, 2018 at 4:55 PM, <tino_maier@xxxxxx> wrote:
Hello everyone,
 
I was searching for a replacement for my rsync-based duplication of data to a second server and ended up with gluster geo-replication. But after reading the documentation and lots of websites, I'm still unsure how to set up geo-replication without retransferring all the data.
I succeeded in converting my existing data folder to a gluster volume by creating a volume and running "find /media/brick -noleaf -print0 | xargs --null stat" inside the mounted gluster volume folder on the master.
But how do I have to prepare the slave? I tried to do it the same way as with the master, but this only results in error messages like
[2018-01-22 11:17:05.209027] E [repce(/media/brick):209:__call__] RepceClient: call failed on peer call=27401:140641732232960:1516619825.12        method=entry_ops        error=OSError
[2018-01-22 11:17:05.209497] E [syncdutils(/media/brick):331:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 210, in main
    main_i()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 801, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1670, in service_loop
    g1.crawlwrap(_oneshot_=True, register_time=register_time)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 597, in crawlwrap
    self.crawl()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1555, in crawl
    self.process([item[1]], 0)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1204, in process
    self.process_change(change, done, retry)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1114, in process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 228, in __call__
    return self.ins(self.meth, *a)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 210, in __call__
    raise res
OSError: [Errno 42] .gfid/00000000-0000-0000-0000-000000000001
I have now removed all extended attributes on the slave and deleted the .glusterfs folder in the brick, so the system is hopefully back in its initial state.
Is there any way to set up a geo-replication session without having gluster resync all the data? That would take months with my poor connection over here. I'm using gluster 3.13.1 on two Ubuntu 16.04.3 LTS hosts.
 
I hope someone can help me with some hints. Thanks and best regards,
Tino
 
 




--
Thanks and Regards,
Kotresh H R
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
