Hello,
We've set up geo-replication, but it isn't actually syncing. The scenario: we have two GlusterFS clusters. Cluster A has nodes cafs10, cafs20, and cafs30, replicating with each other over a LAN. Cluster B has nodes nvfs10, nvfs20, and nvfs30, also replicating with each other over a LAN. We are geo-replicating data from the A cluster to the B cluster over the internet. SSH key access is set up, allowing all the A nodes password-less access to root on nvfs10.
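In case it's relevant, password-less access can be sanity-checked from each A node along these lines (a sketch; the secret.pem path is the stock geo-replication key location, and port 8822 matches the create command below):

# ssh -p 8822 -i /var/lib/glusterd/geo-replication/secret.pem root@nvfs10.example.com hostname

That should print the slave's hostname without any password prompt.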
Geo-replication was set up using these commands, run on cafs10:
gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 create ssh-port 8822 no-verify
gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 config remote-gsyncd /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
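For completeness, the effective session settings can be dumped with the standard config command, if that output would be useful:

# gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 config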
However, after a very short period showing "Initializing...", the status then just sits on "Passive":
# gluster volume geo-replication gvol0 nvfs10.example.com::gvol0 status

MASTER NODE    MASTER VOL    MASTER BRICK                        SLAVE USER    SLAVE                        SLAVE NODE     STATUS     CRAWL STATUS    LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
cafs10         gvol0         /nodirectwritedata/gluster/gvol0    root          nvfs10.example.com::gvol0    nvfs30.local   Passive    N/A             N/A
cafs30         gvol0         /nodirectwritedata/gluster/gvol0    root          nvfs10.example.com::gvol0    N/A            Created    N/A             N/A
cafs20         gvol0         /nodirectwritedata/gluster/gvol0    root          nvfs10.example.com::gvol0    N/A            Created    N/A             N/A
So my questions are:
1. Why does the status on cafs10 list "nvfs30.local" as the slave node? That's the LAN hostname nvfs10 uses to replicate with nvfs30. It isn't reachable from the A cluster, and I didn't use it anywhere when configuring geo-replication.
2. Why does geo-replication just sit in "Passive" status, with no node ever going "Active" and syncing?
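In case logs would help with diagnosis, I believe the relevant ones live in the standard locations (paths assumed from a stock install; the master-side directory name is derived from the master volume, slave host, and slave volume):

On the master nodes (cafs10/20/30):
# tail -n 50 /var/log/glusterfs/geo-replication/gvol0_nvfs10.example.com_gvol0/gsyncd.log
On the slave (nvfs10):
# ls /var/log/glusterfs/geo-replication-slaves/

Happy to attach output from these if it helps.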
Thanks very much for any assistance.