Cool. We will work on resolving the Ubuntu packaging issues. Thanks for reporting.
Data sync always happens via the Gluster mount; connecting to a Slave node is just for distributing the load. Each Master node randomly connects to an available Slave node, but sync happens via the mount (it is not a brick-to-brick data copy).
Since the Master has 4 nodes and the Slave has 2, 2 Master nodes are connected to each Slave node.
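If you want to verify this, one check (a sketch; the aux mountpoint naming here is from memory and may differ across versions) is to look for the auxiliary client mount gsyncd creates on the Slave node while a worker is Active:

# On a slave node (e.g. palace), while the session is running.
# gsyncd writes through a normal GlusterFS client mount of the slave
# volume (an aux mount, typically under /tmp), not directly into a brick.
root@palace:~# mount | grep -i gsyncd
# expect a fuse.glusterfs aux mount such as /tmp/gsyncd-aux-mount-XXXXXX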
regards
Aravinda
On 10/07/2015 05:43 AM, Wade Fitzpatrick wrote:
So I ran the following on palace and madonna, and it appears to be syncing properly now:
root@palace:~# sed -i -e 's:/usr/libexec/glusterfs/gsyncd:/usr/lib/x86_64-linux-gnu/glusterfs/gsyncd:' .ssh/authorized_keys
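As an alternative to editing authorized_keys by hand, the geo-rep session config should let you point the Master at the correct remote gsyncd path (a sketch; I believe the option is called remote-gsyncd, but please verify on your version):

# Run on a master node; tells the master workers where gsyncd lives on
# the slave, instead of the hardcoded /usr/libexec path. A session
# stop/start may be needed for it to take effect.
root@james:~# gluster volume geo-replication static ssh://palace::static config remote-gsyncd /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd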
root@james:~/georepsetup# gluster volume geo-replication status
MASTER NODE   MASTER VOL   MASTER BRICK                   SLAVE USER   SLAVE                  SLAVE NODE   STATUS    CRAWL STATUS   LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
james         static       /data/gluster1/static/brick1   root         ssh://palace::static   palace       Active    Hybrid Crawl   N/A
cupid         static       /data/gluster1/static/brick2   root         ssh://palace::static   madonna      Passive   N/A            N/A
hilton        static       /data/gluster1/static/brick3   root         ssh://palace::static   palace       Passive   N/A            N/A
present       static       /data/gluster1/static/brick4   root         ssh://palace::static   madonna      Passive   N/A            N/A
I am still a little confused by the SLAVE column though - should "palace" in that column really be an A record such as "gluster-remote" that has 2 addresses (for both palace and madonna)?
On 7/10/2015 8:10 am, Wade Fitzpatrick wrote:
Thanks for the response. It looks like the problem is:
[2015-10-06 19:08:49.547874] E [resource(/data/gluster1/static/brick1):222:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-RX3NRr/eb9c0906d6265193f403278fc1489309.sock root@palace /nonexistent/gsyncd --session-owner 3f9f810d-a988-4914-a5ca-5bd7b251a273 -N --listen --timeout 120 gluster://localhost:static" returned with 127, saying:
[2015-10-06 19:08:49.547976] E [resource(/data/gluster1/static/brick1):226:logerr] Popen: ssh> This system is monitored and logged. Any unauthorized usage will be prosecuted.
[2015-10-06 19:08:49.548062] E [resource(/data/gluster1/static/brick1):226:logerr] Popen: ssh> bash: /usr/libexec/glusterfs/gsyncd: No such file or directory
All servers are Ubuntu 15.04 running glusterfs 3.7.4 (built on Sep 1 2015 12:08:58), but /usr/libexec/glusterfs does not exist, so is this a packaging error?
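One way to confirm where the package actually installed gsyncd (assuming the Debian/Ubuntu packaging ships it in glusterfs-common; adjust the package name if your split differs):

# List the files owned by the package and look for gsyncd;
# on Ubuntu it should land under /usr/lib/x86_64-linux-gnu/glusterfs/.
root@james:~# dpkg -L glusterfs-common | grep gsyncd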
On 6/10/2015 6:11 pm, Aravinda wrote:
Please share the tracebacks/errors from the logs (Master nodes): /var/log/glusterfs/geo-replication/static/*.log
regards
Aravinda
On 10/06/2015 01:19 PM, Wade Fitzpatrick wrote:
I am trying to set up geo-replication of a striped-replicate volume. I used https://github.com/aravindavk/georepsetup to configure the replication.
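For reference, the invocation was along these lines (argument order MASTERVOL SLAVEHOST SLAVEVOL, going by the georepsetup README; treat this as a sketch):

# Run from a master node; sets up passwordless SSH and creates the session.
root@james:~/georepsetup# georepsetup static palace static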
root@james:~# gluster volume info
Volume Name: static
Type: Striped-Replicate
Volume ID: 3f9f810d-a988-4914-a5ca-5bd7b251a273
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: james:/data/gluster1/static/brick1
Brick2: cupid:/data/gluster1/static/brick2
Brick3: hilton:/data/gluster1/static/brick3
Brick4: present:/data/gluster1/static/brick4
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
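(For context, not part of the original commands: a sketch of how a 1 x 2 x 2 layout like this would be created; the brick order determines the replica pairs.)

root@james:~# gluster volume create static stripe 2 replica 2 \
    james:/data/gluster1/static/brick1 cupid:/data/gluster1/static/brick2 \
    hilton:/data/gluster1/static/brick3 present:/data/gluster1/static/brick4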
root@james:~# gluster volume geo-replication status
MASTER NODE   MASTER VOL   MASTER BRICK                   SLAVE USER   SLAVE                  SLAVE NODE   STATUS    CRAWL STATUS   LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
james         static       /data/gluster1/static/brick1   root         ssh://palace::static   N/A          Created   N/A            N/A
cupid         static       /data/gluster1/static/brick2   root         ssh://palace::static   N/A          Created   N/A            N/A
hilton        static       /data/gluster1/static/brick3   root         ssh://palace::static   N/A          Created   N/A            N/A
present       static       /data/gluster1/static/brick4   root         ssh://palace::static   N/A          Created   N/A            N/A
So of the 4 bricks, data is striped over brick1 and brick3; brick1=brick2 is a mirror and brick3=brick4 is a mirror. Therefore I have no need to geo-replicate bricks 2 and 4.
At the other site, palace and madonna form a stripe volume (no replication):
root@palace:~# gluster volume info
Volume Name: static
Type: Stripe
Volume ID: 0e91c6f2-3499-4fc4-9630-9da8b7f57db5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: palace:/data/gluster1/static/brick1
Brick2: madonna:/data/gluster1/static/brick2
Options Reconfigured:
performance.readdir-ahead: on
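(Again for context, a sketch of the matching create command for this 1 x 2 stripe layout; not taken from the thread.)

root@palace:~# gluster volume create static stripe 2 \
    palace:/data/gluster1/static/brick1 madonna:/data/gluster1/static/brick2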
However, when I try to start geo-replication, it fails as below.
root@james:~# gluster volume geo-replication static ssh://palace::static start
Starting geo-replication session between static & ssh://palace::static has been successful
root@james:~# gluster volume geo-replication status
MASTER NODE   MASTER VOL   MASTER BRICK                   SLAVE USER   SLAVE                  SLAVE NODE   STATUS   CRAWL STATUS   LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
james         static       /data/gluster1/static/brick1   root         ssh://palace::static   N/A          Faulty   N/A            N/A
cupid         static       /data/gluster1/static/brick2   root         ssh://palace::static   N/A          Faulty   N/A            N/A
hilton        static       /data/gluster1/static/brick3   root         ssh://palace::static   N/A          Faulty   N/A            N/A
present       static       /data/gluster1/static/brick4   root         ssh://palace::static   N/A          Faulty   N/A            N/A
What should I do to set this up properly so that james:/data/gluster1/static/brick1 gets replicated to palace:/data/gluster1/static/brick1, and hilton:/data/gluster1/static/brick3 gets replicated to madonna:/data/gluster1/static/brick2 ???
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users