Hrmm, ok. Shouldn't 'gluster volume delete ...' be smart enough to clean
this up so that I don't have to do it manually? Or alternatively,
'gluster volume create ...' should be able to figure out whether the
path to a brick is really still in use?

As things stand now, the process is rather hacky: I have to issue the
'gluster volume delete ...' command and then manually clean up
afterwards (rough sketches of that cleanup, and of the subdirectory
approach, are at the bottom of this mail). Hopefully this is something
that will be addressed in a future release?

thanks

On Tue, Sep 18, 2012 at 11:26 AM, Kaleb Keithley <kkeithle at redhat.com> wrote:
>
> There are xattrs on the top-level directory of the old brick volume;
> gluster detects those, and that is what causes this.
>
> I personally always create my bricks on a subdir. If you do that, you
> can simply rmdir/mkdir the directory when you want to delete a gluster
> volume.
>
> You can clear the xattrs, or "nuke it from orbit" with mkfs on the
> volume device.
>
>
> ----- Original Message -----
> From: "Lonni J Friedman" <netllama at gmail.com>
> To: gluster-users at gluster.org
> Sent: Tuesday, September 18, 2012 2:03:35 PM
> Subject: cannot create a new volume with a brick that used to be part of a deleted volume?
>
> Greetings,
> I'm running v3.3.0 on Fedora16-x86_64. I used to have a replicated
> volume on two bricks. This morning I deleted it successfully:
> ########
> [root at farm-ljf0 ~]# gluster volume stop gv0
> Stopping volume will make its data inaccessible. Do you want to
> continue? (y/n) y
> Stopping volume gv0 has been successful
> [root at farm-ljf0 ~]# gluster volume delete gv0
> Deleting volume will erase all information about the volume. Do you
> want to continue? (y/n) y
> Deleting volume gv0 has been successful
> [root at farm-ljf0 ~]# gluster volume info all
> No volumes present
> ########
>
> I then attempted to create a new volume using the same bricks that
> used to be part of the (now) deleted volume, but it keeps refusing and
> failing, claiming that the brick is already part of a volume:
> ########
> [root at farm-ljf1 ~]# gluster volume create gv0 rep 2 transport tcp
> 10.31.99.165:/mnt/sdb1 10.31.99.166:/mnt/sdb1
> /mnt/sdb1 or a prefix of it is already part of a volume
> [root at farm-ljf1 ~]# gluster volume info all
> No volumes present
> ########
>
> Note that farm-ljf0 is 10.31.99.165 and farm-ljf1 is 10.31.99.166. I
> also tried restarting glusterd (and glusterfsd) hoping that might clear
> things up, but it had no impact.
>
> How can /mnt/sdb1 be part of a volume when there are no volumes present?
> Is this a bug, or am I just missing something obvious?
>
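
For reference, the manual cleanup Kaleb describes boils down to removing
the gluster xattrs (and the .glusterfs metadata directory) from the old
brick root on every server that hosted a brick. A rough sketch, assuming
GlusterFS 3.3.x and the /mnt/sdb1 brick from the transcript above; the
xattr names are the ones 3.3 normally sets on a brick root, so check with
getfattr on your own systems before removing anything:
########
# show what gluster left behind on the brick root
getfattr -d -m . -e hex /mnt/sdb1

# remove the volume markers (repeat on each brick server)
setfattr -x trusted.glusterfs.volume-id /mnt/sdb1
setfattr -x trusted.gfid /mnt/sdb1

# remove the metadata directory gluster created inside the brick
rm -rf /mnt/sdb1/.glusterfs
########
Once those are gone, 'gluster volume create ...' should stop complaining
that the path is already part of a volume.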
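
Kaleb's subdirectory suggestion would look roughly like the following
(the 'brick' directory name here is just an example):
########
# create the brick as a subdirectory of the mount point, on both servers
mkdir /mnt/sdb1/brick

gluster volume create gv0 rep 2 transport tcp \
    10.31.99.165:/mnt/sdb1/brick 10.31.99.166:/mnt/sdb1/brick

# later, after 'gluster volume stop' and 'gluster volume delete',
# recycling the brick is just (rmdir only works if it is already empty):
rm -rf /mnt/sdb1/brick && mkdir /mnt/sdb1/brick
########
That way the xattrs end up on the subdirectory rather than on the mount
point itself, and throwing them away is a single rm/mkdir. If the mount
point already carries xattrs from an old volume, they would presumably
still need to be cleared first, since the create command also checks
parent directories (that seems to be what the "or a prefix of it"
wording refers to).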