Hello,

I have a Gluster volume set up with geo-replication to two slaves; however, I'm seeing inconsistent status output on the slave nodes. Here is the output of "gluster volume geo-replication status" on each node:

[root@foo-gluster-srv3 ~]# gluster volume geo-replication status

MASTER NODE         MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                               SLAVE NODE          STATUS     CRAWL STATUS       LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
foo-gluster-srv1    gv0           /var/mnt/gluster/brick2    root          ssh://foo-gluster-srv3::slavevol    foo-gluster-srv3    Active     Changelog Crawl    2017-10-04 11:04:27
foo-gluster-srv2    gv0           /var/mnt/gluster/brick     root          ssh://foo-gluster-srv3::slavevol    foo-gluster-srv3    Passive    N/A                N/A
foo-gluster-srv1    gv0           /var/mnt/gluster/brick2    root          ssh://foo-gluster-srv4::slavevol    foo-gluster-srv4    Active     Changelog Crawl    2017-10-04 11:04:27
foo-gluster-srv2    gv0           /var/mnt/gluster/brick     root          ssh://foo-gluster-srv4::slavevol    foo-gluster-srv4    Passive    N/A                N/A

[root@foo-gluster-srv4 ~]# gluster volume geo-replication status

No active geo-replication sessions

Replication to foo-gluster-srv4 *is* working despite what the status shows, and the geo-replication logs on that host are not showing any errors either. Does anybody know what would cause this or how to fix it?
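
For completeness, here is a sketch of the two further checks I would try, using the volume and slave names from the status output above and assuming the slave-side logs are in their default location.

Per-session status, run from a master node:

[root@foo-gluster-srv1 ~]# gluster volume geo-replication gv0 foo-gluster-srv4::slavevol status detail

Slave-side geo-replication logs (default path on a standard install):

[root@foo-gluster-srv4 ~]# ls /var/log/glusterfs/geo-replication-slaves/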