Resync or how to force the replication

Hi, thanks for the quick answer.

I'm running glusterfs 3.4.1

[root@nas-02 datastore]# gluster volume start datastore1 force

volume start: datastore1: failed: Failed to get extended attribute
trusted.glusterfs.volume-id for brick dir /datastore. Reason : No data
available

It seems that the .glusterfs directory is missing for some reason.
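From what I could find, the missing volume-id can apparently be checked and
put back by hand from the Volume ID that "gluster volume info" reports. This
is just a sketch of what I was thinking of trying, with the UUID taken from
the volume info further down and the dashes stripped; please correct me if
this is the wrong approach:

# check whether the brick still has the xattr (I expect this to fail here)
getfattr -n trusted.glusterfs.volume-id -e hex /datastore

# restore it from the Volume ID fdff5190-85ef-4cba-9056-a6bbbd8d6863
setfattr -n trusted.glusterfs.volume-id -v 0xfdff519085ef4cba9056a6bbbd8d6863 /datastore

# and retry the forced start
gluster volume start datastore1 force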


Or should I run

volume replace-brick datastore1 nas-01-data:/datastore nas-02-data:/datastore
commit force

to rebuild/replace the missing brick?
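And once the forced start works, would something like the commands below be
enough to check that the second brick is back and that healing is happening?
(I am just guessing from the docs, so the exact output may differ.)

gluster volume status datastore1        # second brick should show Online: Y
gluster volume heal datastore1 info     # lists files still waiting to be healed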

I'm quite new to GlusterFS.


Thanks






On 26/11/13 12:47, gandalf istari wrote:

Hi, I have set up a two-node replicated GlusterFS volume. After the initial
installation the "master" node was put into the datacenter, and after two
weeks we moved the second one to the datacenter as well.

 But the sync has not started yet.

 On the "master"

gluster> volume info all

Volume Name: datastore1
Type: Replicate
Volume ID: fdff5190-85ef-4cba-9056-a6bbbd8d6863
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: nas-01-data:/datastore
Brick2: nas-02-data:/datastore

gluster> peer status

Number of Peers: 1

Hostname: nas-02-data
Uuid: 71df9f86-a87b-481d-896c-c0d4ab679cfa
State: Peer in Cluster (Connected)


On the "slave"

gluster> peer status

Number of Peers: 1

Hostname: 192.168.70.6
Uuid: 97ef0154-ad7b-402a-b0cb-22be09134a3c
State: Peer in Cluster (Connected)


gluster> volume status all

Status of volume: datastore1
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick nas-01-data:/datastore                49152   Y       2130
Brick nas-02-data:/datastore                N/A     N       N/A
NFS Server on localhost                     2049    Y       8064
Self-heal Daemon on localhost               N/A     Y       8073
NFS Server on 192.168.70.6                  2049    Y       3379
Self-heal Daemon on 192.168.70.6            N/A     Y       3384

Which version of GlusterFS are you running?

"volume status" suggests that the second brick (nas-02-data:/datastore) is
not running.

Can you run "gluster volume start <volname> force" on either of these two
nodes and try again?
You would then also need to run `find . | xargs stat` on the mountpoint
of the volume. That should trigger the self-heal.
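For example, assuming the volume is mounted with the native client at
/mnt/datastore1 (the mount point is only an example; use whatever path you
normally mount it on):

# mount the volume if it is not mounted already
mkdir -p /mnt/datastore1
mount -t glusterfs nas-01-data:/datastore1 /mnt/datastore1

# stat every file and directory through the mount point; reading the
# metadata makes the client look at both bricks and queue the missing
# files for self-heal
cd /mnt/datastore1
find . | xargs stat > /dev/null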



 There are no active volume tasks


I would like to run "gluster volume sync nas-01-data datastore1" on the
"slave".

BTW, there is no concept of "master" and "slave" in AFR (replication).
However, there is a concept of "master volume" and "slave volume" in Gluster
geo-replication.
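(Just to illustrate the terminology: geo-replication is set up between a
master volume and a slave volume with commands roughly like the ones below.
The names "mastervol", "slavehost" and "slavevol" are made up, and you do not
need geo-replication for your two-node replica.)

gluster volume geo-replication mastervol slavehost::slavevol start
gluster volume geo-replication mastervol slavehost::slavevol status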

But then the hosted virtual machines will be unavailable. Is there another
way to start the replication?


 Thanks







