Oh, OK. I overlooked the status output. Please share the geo-replication
logs from the "james" and "hilton" nodes.
regards
Aravinda
On 10/15/2015 05:55 PM, Wade Fitzpatrick wrote:
Well, I'm kind of worried about the roughly 3 million failures listed in
the FAILURES column (1952064 on james plus 1008035 on hilton), the
LAST_SYNCED timestamp showing that syncing "stalled" 2 days ago
(2015-10-13), and the fact that only half of the files have been
transferred to the remote volume.
On 15/10/2015 9:27 pm, Aravinda wrote:
Status looks good. Two master bricks are Active and participating in
syncing. Please let us know what issue you are observing.
regards
Aravinda
On 10/15/2015 11:40 AM, Wade Fitzpatrick wrote:
I have now twice tried to configure geo-replication of our
Striped-Replicate volume to a remote Stripe volume, but it always
seems to have issues.
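For context, a root geo-rep session like this one is normally created with
something along the lines of the commands below (slave host "gluster-b1"
as in the status output further down; exact syntax and options may differ
between gluster versions, and passwordless SSH from a master node to the
slave node is assumed to be in place):

root@james:~# gluster system:: execute gsec_create
root@james:~# gluster volume geo-replication static gluster-b1::static create push-pem
root@james:~# gluster volume geo-replication static gluster-b1::static start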
root@james:~# gluster volume info
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 5f446a10-651b-4ce0-a46b-69871f498dbc
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: james:/data/gluster1/geo-rep-meta/brick
Brick2: cupid:/data/gluster1/geo-rep-meta/brick
Brick3: hilton:/data/gluster1/geo-rep-meta/brick
Brick4: present:/data/gluster1/geo-rep-meta/brick
Options Reconfigured:
performance.readdir-ahead: on
Volume Name: static
Type: Striped-Replicate
Volume ID: 3f9f810d-a988-4914-a5ca-5bd7b251a273
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: james:/data/gluster1/static/brick1
Brick2: cupid:/data/gluster1/static/brick2
Brick3: hilton:/data/gluster1/static/brick3
Brick4: present:/data/gluster1/static/brick4
Options Reconfigured:
auth.allow: 10.x.*
features.scrub: Active
features.bitrot: on
performance.readdir-ahead: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
root@palace:~# gluster volume info
Volume Name: static
Type: Stripe
Volume ID: 3de935db-329b-4876-9ca4-a0f8d5f184c3
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: palace:/data/gluster1/static/brick1
Brick2: madonna:/data/gluster1/static/brick2
Options Reconfigured:
features.scrub: Active
features.bitrot: on
performance.readdir-ahead: on
root@james:~# gluster vol geo-rep static ssh://gluster-b1::static status detail

MASTER NODE    MASTER VOL    MASTER BRICK                    SLAVE USER    SLAVE                       SLAVE NODE    STATUS     CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
james          static        /data/gluster1/static/brick1    root          ssh://gluster-b1::static    10.37.1.11    Active     Changelog Crawl    2015-10-13 14:23:20    0        0       0       1952064     N/A                N/A                     N/A
hilton         static        /data/gluster1/static/brick3    root          ssh://gluster-b1::static    10.37.1.11    Active     Changelog Crawl    N/A                    0        0       0       1008035     N/A                N/A                     N/A
present        static        /data/gluster1/static/brick4    root          ssh://gluster-b1::static    10.37.1.12    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                N/A                     N/A
cupid          static        /data/gluster1/static/brick2    root          ssh://gluster-b1::static    10.37.1.12    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                N/A                     N/A
So just to clarify, data is striped over bricks 1 and 3, and bricks 2
and 4 are their replicas.
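If I am reading the brick order correctly, the 1 x 2 x 2 layout groups
the bricks into replica pairs first and then stripes across the pairs,
roughly like this (my own sketch, not gluster output):

    stripe subvolume 0:  james:brick1   <->  cupid:brick2    (replica pair)
    stripe subvolume 1:  hilton:brick3  <->  present:brick4  (replica pair)

which also matches the status above, where one brick from each pair is Active.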
Can someone help me diagnose the problem and find a solution?
Thanks in advance,
Wade.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users