restoring brick on new server fails on glusterfs

I am trying to attach a brick from another server to a local GlusterFS development server. Therefore I took a dd image of a snapshot on production and wrote it with dd onto the LVM volume on development. Then I deleted the .glusterfs folder in the brick root.
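Roughly, the copy was done like this (the device names below are placeholders, not the real ones on my systems):

# on production: image the LVM snapshot of the brick to a file
sudo dd if=/dev/vg_prod/staging-snap of=/tmp/brick1.img bs=4M

# after transferring brick1.img to the dev server: write it onto the dev LVM volume
sudo dd if=/tmp/brick1.img of=/dev/vg_dev/brick1 bs=4M

# mount the copied volume and remove the .glusterfs metadata directory from the brick root
sudo mount /dev/vg_dev/brick1 /bricks/staging/brick1
sudo rm -rf /bricks/staging/brick1/.glusterfs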

Unfortunately, creating a new volume with this brick failed nevertheless, with the message that the brick is already part of a volume. (How does Gluster know that?!)

I then issued the following:

sudo setfattr -x trusted.gfid /bricks/staging/brick1/
sudo setfattr -x trusted.glusterfs.volume-id /bricks/staging/brick1/
sudo /etc/init.d/glusterfs-server restart
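To double-check that those attributes were really gone from the brick root, I dumped them again (output omitted here; getfattr is from the attr package):

sudo getfattr -m . -d -e hex /bricks/staging/brick1/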


Magically, Gluster still seems to know that this brick is from another server, as it knows the peered Gluster nodes, which are apparently different on the dev server:

sudo gluster volume create staging node1:/bricks/staging/brick1

volume create: staging: failed: Staging failed on gs3. Error: Host node1 is not in 'Peer in Cluster' state
Staging failed on gs2. Error: Host node1 is not in 'Peer in Cluster' state
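I looked at the peer list on the dev server with the usual command (output omitted here):

sudo gluster peer status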

Is there a way to restore that brick on a new server? Thank you for any help on this.

