Thanks. I'm still new to Gluster. I just found a bunch of stuff under /var/lib/glusterd and this web page:
http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
I cleaned up the bricks according to that page and was able to recreate the Gluster volumes:
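For reference, the cleanup that page describes boils down to stripping the Gluster xattrs and internal metadata from each brick path on both nodes; a minimal sketch, using one of my brick paths as the example:

# run as root on BOTH nodes, once per affected brick path
setfattr -x trusted.glusterfs.volume-id /data/glusterfs/kvm1/brick1   # remove the volume membership marker
setfattr -x trusted.gfid /data/glusterfs/kvm1/brick1                  # remove the root gfid xattr
rm -rf /data/glusterfs/kvm1/brick1/.glusterfs                         # remove Gluster's internal metadata dir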
[root@onode1 glusterd]# df -k|grep kvm
/dev/sdb1 10475504 32980 10442524 1% /data/glusterfs/kvm1/brick1
/dev/sdc1 10475504 32980 10442524 1% /data/glusterfs/kvm2/brick1
[root@onode2 glusterd]# df -k|grep kvm
/dev/sdb1 10475504 32980 10442524 1% /data/glusterfs/kvm1/brick2
/dev/sdc1 10475504 32980 10442524 1% /data/glusterfs/kvm2/brick2
[root@onode1 glusterd]# gluster volume create kvm1 replica 2 transport tcp onode1:/data/glusterfs/kvm1/brick1 onode2:/data/glusterfs/kvm1/brick2
volume create: kvm1: success: please start the volume to access data
[root@onode1 glusterd]# gluster volume create kvm2 replica 2 transport tcp onode1:/data/glusterfs/kvm2/brick1 onode2:/data/glusterfs/kvm2/brick2
volume create: kvm2: success: please start the volume to access data
[root@onode1 glusterd]# gluster volume start kvm1
volume start: kvm1: success
[root@onode1 glusterd]# gluster volume start kvm2
volume start: kvm2: success
[root@onode1 glusterd]# gluster volume info
Volume Name: kvm1
Type: Replicate
Volume ID: d8f26ca7-23bc-41df-a299-d9567fcebd2e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: onode1:/data/glusterfs/kvm1/brick1
Brick2: onode2:/data/glusterfs/kvm1/brick2
Volume Name: kvm2
Type: Replicate
Volume ID: 8f3dad35-ae18-4f0f-8cb9-4b722adc20ed
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: onode1:/data/glusterfs/kvm2/brick1
Brick2: onode2:/data/glusterfs/kvm2/brick2
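As a quick sanity check, the volumes can now be mounted with the native FUSE client; a minimal example (the mount point is hypothetical):

mkdir -p /mnt/kvm1
mount -t glusterfs onode1:/kvm1 /mnt/kvm1   # mount volume kvm1 from either peer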
On Friday, December 27, 2013 11:21 AM, BGM <bernhard.glomm@xxxxxxxxxxx> wrote:
With kvm2, you only have onode1 listed as the host?
Are you sure there is no typo anywhere?
Next: AFAIK, with the paths you give, kvm1 and kvm2 should rather be the mount points for your XFS filesystems, while you use the directories brick1/brick2 underneath them to create the Gluster volume (or you have to use --force, right?); see the sketch below.
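i.e. something like this on each node (device name only as an example):

# mount the XFS filesystem one level up...
mount /dev/sdb1 /data/glusterfs/kvm1
# ...and use a subdirectory inside it as the brick
mkdir -p /data/glusterfs/kvm1/brick1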
Next: if volume creation fails, make sure you wipe out the paths on BOTH sides (i.e. unset the attr flags and remove the .glusterfs directory from inside the brick path... though know what you are doing!).
I found the latter normally solves this error. Gluster seems to set at least one of the attrs on at least one side right away and does not roll it back even when the operation as a whole fails. Unsetting them and redoing the command without typos or other errors (is the partner daemon reachable? are both partners peered with the same address/name?) should get you through.
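You can check what is actually set with getfattr; for example (check the parent too, since the error complains about the path "or a prefix of it"):

getfattr -d -m . -e hex /data/glusterfs/kvm2/brick1   # dump all xattrs on the brick, hex-encoded
getfattr -d -m . -e hex /data/glusterfs/kvm2          # same for the parent, i.e. the "prefix"

A leftover trusted.glusterfs.volume-id entry is what makes the create fail.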
hth
Bernhard
Maybe it is confusing, or I'm using the wrong naming conventions. What I'm actually testing is this: each node has one brick for each of the Gluster volumes kvm1 and kvm2. So volume kvm1 will have
onode1:/data/glusterfs/kvm1/brick1
onode2:/data/glusterfs/kvm1/brick2
while kvm2 will have two bricks:
onode1:/data/glusterfs/kvm2/brick1
onode1:/data/glusterfs/kvm2/brick2
On both systems, "brick1" and "brick2" are mount points of XFS filesystems.
Creating volume kvm1 was successful, while kvm2 was not:
volume create: kvm2: failed: /data/glusterfs/kvm2/brick1 or a prefix of it is already part of a volume
Thanks
W
On Thursday, December 26, 2013 5:24 PM, William Kwan <potatok@xxxxxxxxx> wrote:
Hi all,
Running 3.4.1 on CentOS 6.
I have this issue: I created two XFS filesystems on each of two hosts:
onode1:/data/glusterfs/kvm1/brick1
onode2:/data/glusterfs/kvm1/brick2
onode1:/data/glusterfs/kvm2/brick1
onode1:/data/glusterfs/kvm2/brick2
The first volume was created successfully:
# gluster volume create kvm1 replica 2 transport tcp onode1:/data/glusterfs/kvm1/brick1 onode2:/data/glusterfs/kvm1/brick2
When I attempted to create the second volume, I got:
# gluster volume create kvm2 replica 2 transport tcp onode1:/data/glusterfs/kvm2/brick1 onode2:/data/glusterfs/kvm2/brick2
volume create: kvm2: failed: /data/glusterfs/kvm2/brick1 or a prefix of it is already part of a volume
I'm not sure I follow what the error means. Does the second set of filesystems have to be mounted under a totally different directory structure?
Thanks in advance,
Will
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users