Re: [Geo-replication] gluster geo-replication pair lost after rebooting nodes with gluster version glusterfs 3.7.13

Hi,

This looks like an issue with status reporting; we will look into it. The geo-rep session itself is safe; the problem is only in how the status is shown.

Please try a forced stop of geo-replication, followed by a start:

gluster volume geo-replication smb1 110.110.110.14::smb11 stop force
gluster volume geo-replication smb1 110.110.110.14::smb11 start
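
Once restarted, you can verify that the session reappears (reusing the volume and slave names from the commands above; the workers may briefly show a status such as Initializing... before settling into Active/Passive):

gluster volume geo-replication smb1 110.110.110.14::smb11 status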

If the issue is not resolved, please share the Gluster logs with us.
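
For reference, on the master nodes the geo-replication logs are typically under /var/log/glusterfs/geo-replication/, and on the slave under /var/log/glusterfs/geo-replication-slaves/ (default install paths; adjust if yours differ). Something like this on each master node should collect them:

tar czf geo-rep-logs-$(hostname).tar.gz /var/log/glusterfs/geo-replication/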
Regards,
Aravinda
On Thursday 25 August 2016 07:19 AM, Wei-Ming Lin wrote:
Hi all, 

I have three nodes, CS135f55, CS1145c7, and CS1227ac, as the geo-replication source cluster.

The source volume info is as follows:

Volume Name: smb1
Type: Disperse
Volume ID: ccaf6a49-75ba-48cb-821f-4ced8ed01855
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: CS135f55:/export/IFT_lvol_LICSLxEIxq/fs
Brick2: CS1145c7:/export/IFT_lvol_oDC1AuFQDr/fs
Brick3: CS1227ac:/export/IFT_lvol_6JG0HAWa2A/fs
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
storage.batch-fsync-delay-usec: 0
server.allow-insecure: on
performance.stat-prefetch: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
disperse.eager-lock: off
performance.write-behind: off
performance.read-ahead: off
performance.quick-read: off
performance.open-behind: off
performance.io-cache: off
nfs.disable: on
server.manage-gids: on
performance.readdir-ahead: off
cluster.enable-shared-storage: enable
cluster.server-quorum-ratio: 51%

# gluster volume geo-replication status

MASTER NODE    MASTER VOL    MASTER BRICK                      SLAVE USER    SLAVE                          SLAVE NODE    STATUS     CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
CS1227ac       smb1          /export/IFT_lvol_6JG0HAWa2A/fs    root          ssh://110.110.110.14::smb11    CS14b550      Passive    N/A                N/A  
CS135f55       smb1          /export/IFT_lvol_LICSLxEIxq/fs    root          ssh://110.110.110.14::smb11    CS1630aa      Passive    N/A                N/A  
CS1145c7       smb1          /export/IFT_lvol_oDC1AuFQDr/fs    root          ssh://110.110.110.14::smb11    CS154d98      Active     Changelog Crawl    2016-08-25 08:49:26


Now, when I reboot CS135f55, CS1145c7, and CS1227ac at the same time, after all the nodes come back I query the geo-replication status again, and it shows:

"No active geo-replication sessions"

So, if I need to keep my geo-replication configuration after the source cluster reboots, what should I do?

Or is this currently a limitation of gluster geo-replication?

Thanks.

Ivan


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
