Re: Geo-Replication Issue

Thank you for the advice. After recompiling gluster with XML output enabled, I was able to get geo-replication started!

Is this output normal? This is a 2x2 distributed/replicated volume:
# gluster volume geo-rep shares gfs-a-bkp::bkpshares status

MASTER NODE                     MASTER VOL    MASTER BRICK                     SLAVE                   STATUS     CHECKPOINT STATUS    CRAWL STATUS
----------------------------------------------------------------------------------------------------------------------------------------------------
gfs-a-2                         shares        /mnt/a-2-shares-brick-1/brick    gfs-a-bkp::bkpshares    Active     N/A                  Hybrid Crawl
gfs-a-2                         shares        /mnt/a-2-shares-brick-2/brick    gfs-a-bkp::bkpshares    Active     N/A                  Hybrid Crawl
gfs-a-2                         shares        /mnt/a-2-shares-brick-3/brick    gfs-a-bkp::bkpshares    Active     N/A                  Hybrid Crawl
gfs-a-2                         shares        /mnt/a-2-shares-brick-4/brick    gfs-a-bkp::bkpshares    Active     N/A                  Hybrid Crawl
gfs-a-3                         shares        /mnt/a-3-shares-brick-1/brick    gfs-a-bkp::bkpshares    Passive    N/A                  N/A
gfs-a-3                         shares        /mnt/a-3-shares-brick-2/brick    gfs-a-bkp::bkpshares    Passive    N/A                  N/A
gfs-a-3                         shares        /mnt/a-3-shares-brick-3/brick    gfs-a-bkp::bkpshares    Passive    N/A                  N/A
gfs-a-3                         shares        /mnt/a-3-shares-brick-4/brick    gfs-a-bkp::bkpshares    Passive    N/A                  N/A
gfs-a-4                         shares        /mnt/a-4-shares-brick-1/brick    gfs-a-bkp::bkpshares    Passive    N/A                  N/A
gfs-a-4                         shares        /mnt/a-4-shares-brick-2/brick    gfs-a-bkp::bkpshares    Passive    N/A                  N/A
gfs-a-4                         shares        /mnt/a-4-shares-brick-3/brick    gfs-a-bkp::bkpshares    Passive    N/A                  N/A
gfs-a-4                         shares        /mnt/a-4-shares-brick-4/brick    gfs-a-bkp::bkpshares    Passive    N/A                  N/A
gfs-a-1                         shares        /mnt/a-1-shares-brick-1/brick    gfs-a-bkp::bkpshares    Active     N/A                  Hybrid Crawl
gfs-a-1                         shares        /mnt/a-1-shares-brick-2/brick    gfs-a-bkp::bkpshares    Active     N/A                  Hybrid Crawl
gfs-a-1                         shares        /mnt/a-1-shares-brick-3/brick    gfs-a-bkp::bkpshares    Active     N/A                  Hybrid Crawl
gfs-a-1                         shares        /mnt/a-1-shares-brick-4/brick    gfs-a-bkp::bkpshares    Active     N/A                  Hybrid Crawl

To put it another way: is it normal for two of the nodes to be Active and the other two Passive? I'm guessing the answer is yes because of the distributed/replicated layout (only one brick in each replica pair needs to sync to the slave), but I'd like some confirmation of that.

Cheers,
Dave 

On Thu, Dec 11, 2014 at 12:19 PM, Aravinda <avishwan@xxxxxxxxxx> wrote:
Geo-replication depends on the XML output of the gluster CLI commands. For example, before connecting to the slave nodes, it gets the node lists from both master and slave by running the 'gluster volume info' and 'gluster volume status' commands with --xml.

The Python tracebacks you are seeing in the logs occur because geo-replication cannot parse the output of those commands when XML support is not compiled in.
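You can reproduce that failure mode in miniature (a sketch; the sample string below mimics the CLI warning rather than captured gluster output): feeding the plain-text warning into an XML parser raises an error, which is exactly what surfaces as the tracebacks in the geo-rep logs.

```shell
# Simulate geo-replication's parsing step: it expects XML on stdout.
# When gluster is built without XML support, the CLI prints a plain
# warning instead, and any XML parser fed that text raises an error.
sample='XML output not supported. Ignoring the option'
printf '%s\n' "$sample" \
  | python3 -c 'import sys, xml.etree.ElementTree as ET; ET.fromstring(sys.stdin.read())' \
  || echo "XML parse failed -> this is what tracebacks in the geo-rep logs"
```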

--
regards
Aravinda
http://aravindavk.in



On 12/11/2014 07:56 PM, David Gibbons wrote:
Thanks for the feedback, answers inline below:
 
   Have you followed all the upgrade steps w.r.t geo-rep
   mentioned in the following link?

I didn't upgrade geo-rep, I disconnected the old replicated server and started from scratch. So everything with regard to geo-rep is fresh/brand-new.
 
2. Is the output of the command 'gluster vol info <vol-name> --xml' proper?
   Please paste the output.

I do not have gluster compiled with XML support. Perhaps that is the problem. Here is the output of the command you referenced:
XML output not supported. Ignoring '--xml' option

This is my config summary:

GlusterFS configure summary
===========================
FUSE client          : yes
Infiniband verbs     : no
epoll IO multiplex   : yes
argp-standalone      : no
fusermount           : yes
readline             : no
georeplication       : yes
Linux-AIO            : no
Enable Debug         : no
systemtap            : no
Block Device xlator  : no
glupy                : no
Use syslog           : yes
XML output           : no
QEMU Block formats   : no
Encryption xlator    : no

Am I missing something that is required for geo-replication? I've found the documentation a bit lacking on build dependencies for those of us compiling the binaries ourselves.
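For what it's worth, the "XML output : no" line in the configure summary usually means the libxml2 development headers were not found at build time, so a rebuild after installing them should flip it to "yes". A hedged sketch (the package name assumes an RPM-based distro; on Debian/Ubuntu it would be libxml2-dev):

```shell
# Install the libxml2 headers that configure probes for
# (assumed package name for a RHEL/CentOS-style system).
yum install -y libxml2-devel

# Rebuild; the configure summary should now report "XML output : yes".
./autogen.sh
./configure
make
make install
```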

Cheers,
Dave



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users

