Re: geo-replications invalid names when using rsyncd

Do you see any errors in Master logs? (/var/log/glusterfs/geo-replication/<MASTERVOL>/*.log)
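For example, something like this should surface any worker errors (the grep pattern is just a suggestion):

  # grep -iE 'error|warning|traceback' /var/log/glusterfs/geo-replication/<MASTERVOL>/*.log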

regards
Aravinda

On 10/15/2015 07:51 PM, Brian Ericson wrote:
Thanks!

As near as I can tell, GlusterFS thinks it's done -- I finally ended up renaming the files myself after waiting a couple of days.

If I take an idle master/slave (no pending writes) and rsync a file to the master volume, I can see that the file is otherwise correct (the sha1sum of the file on the master matches the sha1sum of .file.6chars on the slave) and that the "last synced" time is bumped. But, for as long as I've been willing to wait, I've yet to see .file.6chars renamed to file.
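For reference, this is roughly how I'm checking (the mount points and the volume/host names below are placeholders for my setup):

  # sha1sum /mnt/master-volume/file
  # sha1sum /mnt/slave-volume/.file.6chars
  # gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> status detail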

I'm using
# rpm -qa gluster*
glusterfs-fuse-3.7.5-1.el7.x86_64
glusterfs-3.7.5-1.el7.x86_64
glusterfs-cli-3.7.5-1.el7.x86_64
glusterfs-libs-3.7.5-1.el7.x86_64
glusterfs-api-3.7.5-1.el7.x86_64
glusterfs-geo-replication-3.7.5-1.el7.x86_64
glusterfs-server-3.7.5-1.el7.x86_64
glusterfs-client-xlators-3.7.5-1.el7.x86_64

On 10/15/2015 06:35 AM, Aravinda wrote:
The Slave will be eventually consistent. If rsync created temp files in the
Master Volume and then renamed them, those operations get recorded in the
Changelogs (journal). The exact same steps are then replayed on the Slave
Volume; if there are no errors, Geo-rep should unlink the temp files on the
Slave and retain the actual files.
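To illustrate, the sequence rsync performs on the Master mount is roughly
equivalent to the following (the mount point and the temp-file suffix are only
illustrative); both operations should get recorded in the Changelog and be
replayed on the Slave:

  # cd /mnt/master-volume             # Master volume, FUSE mounted
  # cp /tmp/source-file .file.AbC123  # rsync first writes the data into a hidden temp file
  # mv .file.AbC123 file              # then renames it to the final name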

Let us know if the issue persists even after some time. Also let us know the
Gluster version you are using.

regards
Aravinda
http://aravindavk.in

On 10/15/2015 05:20 AM, Brian Ericson wrote:
Admittedly an odd case, but...

o I have a simple geo-replication setup:  master -> slave.
o I've mounted the master's volume on the master host.
o I've also set up an rsyncd server on the master:
  [master-volume]
         path = /mnt/master-volume
         read only = false
o I now rsync from a client to the master using the rsync protocol:
  rsync file rsync://master/master-volume

What I see is "file" when looking at the master volume, but that's not
I see in the slave volume.  This is what is replicated to the slave:

  .file.6chars

where "6chars" is some random letters & numbers.

I'm pretty sure the .file.6chars version is due to my client's rsync
and represents the name rsync gives the file during transport, after
which it renames it to file.  Is this rename happening at such a low
level that GlusterFS's geo-replication doesn't catch it and doesn't see
that it should be doing a rename?
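If it helps narrow this down, one thing I could try (assuming the temp-file
behaviour itself isn't needed here) is telling rsync not to use a temp file at
all, so no rename is ever involved on the master:

  rsync --inplace file rsync://master/master-volume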


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


