Re: Geo-replication adding new master node

Hi Aravinda,

We ran a "gluster system:: execute gsec_create" and "georep create push-pem" with force option as suggested, and then a "gluster volume geo-replication ... status" reported the two new master nodes as being in "Created" status. We did a geo-replication "stop" and then "start" and are pleased to see the two new master nodes are now in "Passive" status. Thank you for your help!


On Tue, 1 Jun 2021 at 10:06, David Cunningham <dcunningham@xxxxxxxxxxxxx> wrote:
Hi Aravinda,

Thank you very much - we will give that a try.


On Mon, 31 May 2021 at 20:29, Aravinda VK <aravinda@xxxxxxxxx> wrote:
Hi David,

On 31-May-2021, at 10:37 AM, David Cunningham <dcunningham@xxxxxxxxxxxxx> wrote:

Hello,

We have a GlusterFS configuration with mirrored nodes on the master side geo-replicating to mirrored nodes on the secondary side.

When geo-replication is initially created it seems to automatically add all the mirrored nodes on the master side as geo-replication master nodes, which is fine. My first question is: if we add a new master-side node, how can we add it as a geo-replication master? This doesn't seem to happen automatically, according to the output of "gluster volume geo-replication gvol0 secondary::gvol0 status". If we use the normal "gluster volume geo-replication gvol0 secondary::slave-vol create push-pem force", it says that the secondary-side volume is not empty, which is true, because we're adding a master node to an existing geo-replication session.

This is not automatic. Run `gluster-georep-sshkey generate` and then `georep create push-pem` with the force option to push the keys from the new nodes to the secondary nodes.
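
For example, with the volume names from your mail (run from one of the master nodes; adjust the names for your setup):

$ gluster-georep-sshkey generate
$ gluster volume geo-replication gvol0 secondary::slave-vol create push-pem force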

You can also try this tool instead of the georep create command.


$ gluster-georep-setup gvol0 secondary::slave-vol --force


My second question is whether we can geo-replicate to multiple nodes on the secondary side? Ideally we would normally have something like:
master A -> secondary A
master B -> secondary B
master C -> secondary C
so that any master or secondary node could go offline but geo-replication would keep working.

The geo-replication create command needs one secondary node to establish the session. Once the session starts, Geo-rep starts one worker process per master brick.

These worker processes get the list of secondary nodes by running `ssh <secondary-host> gluster volume info <secondary-volume>`. Geo-rep then distributes the connections to the secondary nodes in a round-robin way. For example, if the master volume has three nodes and the secondary volume has three nodes, as you mentioned, then Geo-rep makes the connections as Master A -> Secondary A, Master B -> Secondary B and Master C -> Secondary C.
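
You can see the resulting mapping in the status output, for example (volume names from your mail):

$ gluster volume geo-replication gvol0 secondary::slave-vol status
# one row per master brick; the secondary (slave) node column shows
# which secondary node that brick's worker is connected to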

Secondary node failover: if a node goes down in the secondary cluster, the master worker connects to another secondary node and continues replication. One known issue: if the secondary node specified in the geo-rep create command goes down, Geo-rep fails to get the volume info (which it needs to get the list of secondary nodes). This could be solved by providing the list of secondary nodes as a config option (not yet available).
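
To check this by hand, the same lookup the workers do can be run from a master node (host and volume names here are just the ones from your mail):

$ ssh secondary gluster volume info slave-vol
# if the host named in the geo-rep create command is down, this lookup
# fails and the workers cannot discover the other secondary nodes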


Thank you very much in advance.

--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782

Aravinda Vishwanathapura





--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782


--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
