Version: glusterfs-server-3.4.2-1.el6.x86_64
I have an issue where geo-replication is not reporting the correct status; the output is shown below. I have also been unable to stop geo-replication without adding a firewall rule on the slave: the stop command returns a cryptic error, and nothing useful appears in the logs.
# gluster volume geo-replication status
NODE                        MASTER      SLAVE                            STATUS
---------------------------------------------------------------------------------------------------
ovirt001.miovision.corp     rep1        gluster://10.0.11.4:/rep1        faulty
ovirt001.miovision.corp     miofiles    gluster://10.0.11.4:/miofiles    faulty
# gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 start
geo-replication session between rep1 & gluster://10.0.11.4:/rep1 already started
geo-replication command failed
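(When the CLI says "already started" while status shows faulty, the gsyncd worker is usually crash-looping, and the master-side geo-replication log often contains the real error. A minimal check, assuming the default 3.4 log location under /var/log/glusterfs/geo-replication/ — adjust the path if your install differs:)

```shell
# Tail the master-side geo-rep log for the rep1 session.
# The path is the usual 3.4 default and may differ on your system.
LOG_DIR=/var/log/glusterfs/geo-replication/rep1
if [ -d "$LOG_DIR" ]; then
    tail -n 50 "$LOG_DIR"/*.log
else
    echo "no geo-rep logs at $LOG_DIR"
fi
```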
[root@ovirt001 ~]# gluster volume geo-replication status
NODE                        MASTER      SLAVE                            STATUS
---------------------------------------------------------------------------------------------------
ovirt001.miovision.corp     rep1        gluster://10.0.11.4:/rep1        faulty
ovirt001.miovision.corp     miofiles    gluster://10.0.11.4:/miofiles    faulty
How can I manually remove a geo-rep agreement?
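(For reference, here is the kind of manual teardown I have been considering on the master node. This is only a sketch: the on-disk layout under /var/lib/glusterd/geo-replication/ is an assumption for 3.4 and varies by version, so verify the paths and back everything up before deleting anything.)

```shell
#!/bin/sh
# Hedged sketch of manually removing a geo-rep session on the master.
# Paths are assumptions for glusterfs 3.4; verify before running.

GEOREP_DIR=${GEOREP_DIR:-/var/lib/glusterd/geo-replication}
MASTER=rep1

# 1. Stop the gsyncd worker for this master volume, if one is running.
if command -v pkill >/dev/null 2>&1; then
    pkill -f "gsyncd.*${MASTER}" 2>/dev/null || true
fi

# 2. Back up, then remove the per-session state for the master volume.
if [ -d "${GEOREP_DIR}/${MASTER}" ]; then
    cp -a "${GEOREP_DIR}/${MASTER}" "${GEOREP_DIR}/${MASTER}.bak"
    rm -rf "${GEOREP_DIR}/${MASTER}"
fi

# 3. Restart glusterd afterwards so it re-reads its state, e.g.:
#    service glusterd restart
echo "removed session state for ${MASTER} (if present)"
```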
Thanks,
Steve
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users