Why does geo-replication stop when a replica member goes down

Hi,

We are testing glusterfs. We have a setup like this:

Site A: 4 nodes, 2 bricks per node, 1 volume, distributed, replicated,
replica count 2
Site B: 2 nodes, 2 bricks per node, 1 volume, distributed
geo-replication setup: master: site A, node 1; slave: site B, node 1; transport: ssh

replicasets on Site A:
node 1, brick 1 + node 3, brick 1
node 2, brick 1 + node 4, brick 1
node 2, brick 2 + node 3, brick 2
node 1, brick 2 + node 4, brick 2
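For reference, a rough sketch of how a layout like this could be created. Hostnames (siteA-n1 ... siteA-n4, siteB-n1) and brick paths (/bricks/b1, /bricks/b2) are placeholders, and the exact slave URL syntax for geo-replication varies between GlusterFS versions:

```shell
# Site A: distributed-replicated volume, replica 2.
# With replica 2, bricks are paired in the order listed, which
# reproduces the four replica sets above.
gluster volume create mastervol replica 2 \
    siteA-n1:/bricks/b1 siteA-n3:/bricks/b1 \
    siteA-n2:/bricks/b1 siteA-n4:/bricks/b1 \
    siteA-n2:/bricks/b2 siteA-n3:/bricks/b2 \
    siteA-n1:/bricks/b2 siteA-n4:/bricks/b2
gluster volume start mastervol

# Geo-replication from site A to site B over ssh.
# Slave spec syntax (host:/dir vs. host::volume) depends on the
# GlusterFS version in use.
gluster volume geo-replication mastervol ssh://root@siteB-n1::slavevol start
```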

I monitor geo-replication status with:
watch -n 1 gluster volume geo-replication status

All is OK.

When I stop the glusterd service on node 3, the geo-replication status goes to
faulty. When I start glusterd on node 3 again, the status returns to OK after
some time.
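The test sequence, as shell commands (hostname siteA-n3 and the service invocation are assumptions; on systemd-based systems this would be systemctl stop/start glusterd):

```shell
# Take one node's glusterd down and watch geo-replication status.
ssh siteA-n3 'service glusterd stop'
gluster volume geo-replication status   # status goes to faulty

# Bring it back; status recovers to OK after some time.
ssh siteA-n3 'service glusterd start'
gluster volume geo-replication status
```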

Question:
Stopping node 3 takes down one member of two different replica sets, but each
set still has its other member, so the volume as a whole stays healthy. Why
does geo-replication go faulty in this test case?

Fred