distributed-replicated pool geo-replication to distributed-only pool only syncing to one slave node

The subject line is a mouthful, but pretty much says it all.

apivision:~$ sudo gluster volume geo-replication MIXER svc-mountbroker@trident24::DR-MIXER status
 
MASTER NODE    MASTER VOL    MASTER BRICK            SLAVE USER         SLAVE                                  SLAVE NODE    STATUS     CRAWL STATUS     LAST_SYNCED                  
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
apivision      MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    ua610         Active     History Crawl    2016-02-22 21:45:56          
studer900      MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    trident24     Passive    N/A              N/A                          
neve88rs       MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    trident24     Passive    N/A              N/A                          
ssl4000        MIXER         /zpuddle/audio/mixer    svc-mountbroker    svc-mountbroker@trident24::DR-MIXER    ua610         Active     History Crawl    2016-02-22 22:05:53       


This seems to indicate that only one of my slave nodes (ua610) is actively receiving data: both Active workers point at it, and trident24 only appears as the target of the Passive ones. That seems wrong to me, or did I misunderstand the newer geo-replication feature where multiple nodes participate in the process? Can I get it to balance the rsyncs across more than one slave node?
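
If more detail would help, I can post the output of something like the following (assuming I have the 3.7 syntax right) to show per-worker sync counters on the master and to confirm that both slave nodes actually hold bricks of DR-MIXER:

apivision:~$ sudo gluster volume geo-replication MIXER svc-mountbroker@trident24::DR-MIXER status detail
trident24:~$ sudo gluster volume info DR-MIXER
trident24:~$ sudo gluster volume status DR-MIXER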

I used georepsetup, which, by the way, is a freaking awesome tool: it did in a few seconds what I had been tearing my hair out over for days, namely getting geo-replication working with mountbroker. But even when I set up plain root-based geo-replication manually, the balance fell out this way on the back end every time.
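
For reference, the setup amounts to roughly the following. The georepsetup invocation is reconstructed from memory rather than pasted from my history (I may be misremembering the exact form for a non-root user), and the glusterd.vol lines are the standard mountbroker options from the admin guide, with the log group name taken straight from the doc example. On the slave nodes, /etc/glusterfs/glusterd.vol got the usual mountbroker options:

    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.svc-mountbroker DR-MIXER
    option geo-replication-log-group geogroup
    option rpc-auth-allow-insecure on

and then from the master side:

apivision:~$ sudo georepsetup MIXER svc-mountbroker@trident24 DR-MIXER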

Debian 8/Jessie, gluster 3.7.8-1, on ZFS, with a 119TB volume at each end. Data is distributing properly within the slave pool (at a cursory glance), and in general I'm not aware of anything being outright broken. Front-end replica pairs are apivision/neve88rs and ssl4000/studer900.
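
To spell the layout out, the master volume was created along these lines (reconstructed, not pasted; brick paths are from the status output above, and the slave brick paths are placeholders since I don't have them handy):

gluster volume create MIXER replica 2 \
    apivision:/zpuddle/audio/mixer neve88rs:/zpuddle/audio/mixer \
    ssl4000:/zpuddle/audio/mixer studer900:/zpuddle/audio/mixer

gluster volume create DR-MIXER \
    trident24:/<slave-brick> ua610:/<slave-brick>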

PS: it's in History Crawl at the moment because I recently paused and resumed geo-replication.
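
(The pause/resume was nothing exotic, just something like:

apivision:~$ sudo gluster volume geo-replication MIXER svc-mountbroker@trident24::DR-MIXER pause
apivision:~$ sudo gluster volume geo-replication MIXER svc-mountbroker@trident24::DR-MIXER resume
)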
