Re: add geo-replication "passive" node after node replacement

OK, steps 1 and 2 worked fine.
For step 3 I had to use "stop force"; otherwise the staging step failed
because S3 didn't know about the replica and refused to stop it.
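
For reference, the commands I ran were along these lines (a sketch, using
the volume names from the status output quoted below):

       # gluster volume geo-replication sharedvol S5::sharedvolslave stop force
       # gluster volume geo-replication sharedvol S5::sharedvolslave start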

Thank you Kotresh,
Stefano

On 7 February 2018 at 14:42, Kotresh Hiremath Ravishankar
<khiremat@xxxxxxxxxx> wrote:
> Hi,
>
> When S3 is added to the master volume from the new node, the following
> commands should be run to generate and distribute the SSH keys:
>
> 1. Generate ssh keys from new node
>
>        #gluster system:: execute gsec_create
>
> 2. Push those ssh keys of new node to slave
>
>       #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force
>
> 3. Stop and start geo-rep
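> For example, with the same placeholders as in steps 1 and 2 (the usual
> stop/start syntax, sketched here):
>
>       #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> stop
>       #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> start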
>
>
> But note that when removing a brick and adding a brick, you should make
> sure the data from the brick being removed has been synced to the slave.
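>
> One way to verify that, sketched here (assuming checkpoint support, which
> is available in recent 3.x releases): set a checkpoint and wait until the
> status reports it as completed.
>
>       #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config checkpoint now
>       #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> status detail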
>
> Thanks,
> Kotresh HR
>
> On Wed, Feb 7, 2018 at 4:21 PM, Stefano Bagnara <lists@xxxxxxxx> wrote:
>>
>> Hi all,
>>
>> I had a replica 2 gluster 3.12 volume between S1 and S2 (1 brick per node),
>> geo-replicated to S5; both S1 and S2 were visible in the
>> geo-replication status, with S2 "active" and S1 "passive".
>>
>> I had to replace S1 with S3, so I did an
>> "add-brick  replica 3 S3"
>> and then
>> "remove-brick replica 2 S1".
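>>
>> (Roughly, using the brick path visible in the status output below; the
>> full syntax is sketched here, with "force" needed to reduce the replica
>> count:)
>>
>>       # gluster volume add-brick sharedvol replica 3 S3:/home/sharedvol
>>       # gluster volume remove-brick sharedvol replica 2 S1:/home/sharedvol force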
>>
>> Now I have a replica 2 gluster volume between S3 and S2 again, but the
>> geo-replication status only shows S2 as active, with no other peer
>> involved. So it seems S3 does not know about the geo-replication and is
>> not ready to take over if S2 goes down.
>>
>> Here was the original geo-rep status
>>
>> # gluster volume geo-replication status
>>
>> MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                       SLAVE NODE    STATUS     CRAWL STATUS       LAST_SYNCED
>> -------------------------------------------------------------------------------------------------------------------------------------------------
>> S2             sharedvol     /home/sharedvol    root          ssh://S5::sharedvolslave    S5            Passive    N/A                N/A
>> S1             sharedvol     /home/sharedvol    root          ssh://S5::sharedvolslave    S5            Active     Changelog Crawl    2018-02-07 10:18:57
>>
>> Here is the new geo-replication status:
>>
>> # gluster volume geo-replication status
>>
>> MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                       SLAVE NODE    STATUS     CRAWL STATUS       LAST_SYNCED
>> -------------------------------------------------------------------------------------------------------------------------------------------------
>> S2             sharedvol     /home/sharedvol    root          ssh://S5::sharedvolslave    S5            Active     Changelog Crawl    2018-02-07 11:48:31
>>
>>
>> How can I add S3 as a passive node in the geo-replication to S5?
>>
>> Thank you,
>> Stefano
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@xxxxxxxxxxx
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
> --
> Thanks and Regards,
> Kotresh H R


