You probably want to blow away your brick filesystems and start clean - there will be leftover xattr information that is confusing Gluster (see the cleanup sketch below the quoted message). Best practice is to use DNS names for peers rather than IP addresses.

On 6/1/12 12:27 AM, ???????? ????????? ?????????? wrote:
>
> Hello!
>
> I fired up gluster v3.2.5 with this:
>
> gluster peer probe 10.0.1.131
>
> gluster volume create vms replica 2 transport tcp 10.0.1.130:/mnt/ld0
> 10.0.1.131:/mnt/ld3 10.0.1.130:/mnt/ld1 10.0.1.131:/mnt/ld4
> 10.0.1.130:/mnt/ld2 10.0.1.131:/mnt/ld5
>
> gluster volume start vms
>
> mkdir /mnt/gluster
>
> added this to /etc/fstab:
>
> 127.0.0.1:/vms /mnt/gluster glusterfs defaults,_netdev 0 0
>
> mount /mnt/gluster
>
> Everything was awesome.
>
> Then, for some reason, I had to change the IPs of my servers:
>
> 10.0.1.130 -> 10.0.1.50
>
> 10.0.1.131 -> 10.0.1.51
>
> I've decided to:
>
> stop my volume,
>
> delete it,
>
> stop glusterd,
>
> erase the Gluster software (and all configs) with yum (CentOS 6.2 x86_64),
>
> and then recreate that volume again with:
>
> gluster peer probe 10.0.1.51
>
> gluster volume create vms replica 2 transport tcp 10.0.1.50:/mnt/ld0
> 10.0.1.51:/mnt/ld3 10.0.1.50:/mnt/ld1 10.0.1.51:/mnt/ld4
> 10.0.1.50:/mnt/ld2 10.0.1.51:/mnt/ld5
>
> but it said:
>
> Operation failed
>
> I googled for a while and found out that the new 3.3.0 had arrived, so I
> updated to it and started the whole thing over again.
>
> Now it says:
>
> /mnt/ld0 or a prefix of it is already part of a volume
>
> Any help?
>
> Thanks in advance
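For reference, the "/mnt/ld0 or a prefix of it is already part of a volume" message is glusterd noticing the volume-id/gfid extended attributes (and the .glusterfs directory) that the old volume left on each brick root. If you'd rather not reformat the bricks, a rough sketch of the usual cleanup, assuming the bricks are /mnt/ld0 through /mnt/ld5 as in your commands and running this on whichever server holds each brick:

    # remove the leftover Gluster metadata from a former brick root
    # (repeat for /mnt/ld1 .. /mnt/ld5 on their respective servers;
    #  either xattr may be absent depending on which version created
    #  the brick - setfattr will just report it as not set)
    setfattr -x trusted.glusterfs.volume-id /mnt/ld0
    setfattr -x trusted.gfid /mnt/ld0
    rm -rf /mnt/ld0/.glusterfs

If the bricks hold no data you care about, reformatting them as suggested above achieves the same thing with less room for error.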
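And on the DNS point: if the peers are probed and the volume is created with names instead of raw IPs, a later renumbering only means updating DNS (or /etc/hosts), not rebuilding the volume. A minimal sketch, using gluster1 and gluster2 as stand-in hostnames for 10.0.1.50 and 10.0.1.51:

    # probe the second peer by name, then create and start the volume
    gluster peer probe gluster2
    gluster volume create vms replica 2 transport tcp \
        gluster1:/mnt/ld0 gluster2:/mnt/ld3 \
        gluster1:/mnt/ld1 gluster2:/mnt/ld4 \
        gluster1:/mnt/ld2 gluster2:/mnt/ld5
    gluster volume start vms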