I did set it intentionally, because I found a case where files would be missed during geo-replication, and xsync seemed to handle it better. The issue occurs when the change method is set to ChangeLog and you bring down the "Active" node that is handling the geo-replication session. Any files written into the cluster while geo-replication is down (e.g., while the session is being failed over to another node) are missed/skipped and won't ever be transferred to the other cluster.
Is this the expected behavior? If not, then I can open a bug on it.
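For reference, here's roughly how the session was switched over to xsync (a sketch; "mastervol" and "slavehost::slavevol" below are placeholders, not our real names):

    # set the change detection method for the geo-rep session to xsync
    gluster volume geo-replication mastervol slavehost::slavevol \
        config change_detector xsync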
-CJ
From: Venky Shankar <yknev.shankar@xxxxxxxxx>
Date: Wednesday, April 16, 2014 at 4:43 PM
To: CJ Beck <chris.beck@xxxxxxxxxxx>
Cc: "gluster-users@xxxxxxxxxxx" <gluster-users@xxxxxxxxxxx>
Subject: Re: [Gluster-users] Question about geo-replication and deletes in 3.5 beta train

On Thu, Apr 17, 2014 at 3:01 AM, CJ Beck <chris.beck@xxxxxxxxxxx> wrote:
Was that set intentionally? With xsync as the main change detection mechanism, geo-replication crawls the filesystem every 60 seconds to replicate changes. Changelog mode only handles live changes, so any deletes performed before this option was set would not be propagated.
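You can check which detector a session is currently using by querying the session config, e.g. (volume and slave names are placeholders):

    # print the current change_detector value for this session
    gluster volume geo-replication mastervol slavehost::slavevol \
        config change_detector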
As of now, no. With distributed geo-replication, the geo-rep daemon crawls the bricks (instead of the mount). Since a brick holds only a subset of the filesystem entities (e.g., in a distributed volume), it's hard to detect purged entries without crawling the mount and comparing the entries between the master and the slave, which is slow. This is where changelog mode helps.
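To see why that comparison is slow, a naive equivalent over the two mounts (purely an illustration with placeholder mount points, not what the geo-rep daemon actually runs) would be:

    # entries still on the slave but gone from the master = purged;
    # this has to walk every entry on both sides to report one delete
    comm -13 <(cd /mnt/master && find . | sort) \
             <(cd /mnt/slave && find . | sort)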
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users