Issue recreating volumes

Brian,
     The first point (1) is working as intended. Allowing something like that could get the volume into a very complicated state.
Please go through the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=812214

Pranith

----- Original Message -----
From: "Brian Candler" <B.Candler at pobox.com>
To: gluster-users at gluster.org
Sent: Thursday, June 7, 2012 8:57:16 PM
Subject: Issue recreating volumes

Here are a couple of wrinkles I have come across while trying gluster 3.3.0
under Ubuntu 12.04.

(1) At one point I decided to delete some volumes and recreate them. But
it would not let me recreate them:

    root at dev-storage2:~# gluster volume create fast dev-storage1:/disk/storage1/fast dev-storage2:/disk/storage2/fast
    /disk/storage2/fast or a prefix of it is already part of a volume

This is even though "gluster volume info" showed no volumes.

Restarting glusterd didn't help either. Nor indeed did a complete reinstall
of glusterfs, even with apt-get remove --purge and rm -rf'ing the state
directories.

Digging around, I found some hidden state files:

    # ls -l /disk/storage1/*/.glusterfs/00/00
    /disk/storage1/fast/.glusterfs/00/00:
    total 0
    lrwxrwxrwx 1 root root 8 Jun  7 14:23 00000000-0000-0000-0000-000000000001 -> ../../..

    /disk/storage1/safe/.glusterfs/00/00:
    total 0
    lrwxrwxrwx 1 root root 8 Jun  7 14:21 00000000-0000-0000-0000-000000000001 -> ../../..

I deleted them on both machines:

    rm -rf /disk/*/.glusterfs

Problem solved? No, not even with glusterd restart :-(

    root at dev-storage2:~# gluster volume create safe replica 2 dev-storage1:/disk/storage1/safe dev-storage2:/disk/storage2/safe
    /disk/storage2/safe or a prefix of it is already part of a volume
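
At this point my guess is that the check is driven by extended attributes
stored on the brick directories themselves, rather than by anything in
glusterd's own state directory. Something like this should show them (needs
the attr package; the attribute names in the comment are only my guess, I
haven't found them documented):

    # speculative: dump any trusted.* attributes left on a brick directory
    getfattr -m . -d -e hex /disk/storage2/safe
    # if the directory was ever used as a brick I'd expect something like
    # trusted.glusterfs.volume-id and trusted.gfid to show up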

In the end, what I needed was to delete the actual data bricks themselves:

    rm -rf /disk/*/fast
    rm -rf /disk/*/safe

That allowed me to recreate the volumes.
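
With hindsight I suspect deleting the data wasn't strictly necessary: if the
blocker really is those extended attributes, then clearing them plus the
.glusterfs directory ought to be enough. This is an untested sketch, and the
attribute names are my assumption:

    # per brick, on every server: drop the (assumed) volume markers,
    # then the hidden index directory, then restart glusterd
    setfattr -x trusted.glusterfs.volume-id /disk/storage1/fast
    setfattr -x trusted.gfid /disk/storage1/fast
    rm -rf /disk/storage1/fast/.glusterfs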

This is probably an understanding/documentation issue. I'm sure there's a
lot of magic going on in the gluster 3.3 internals (is that long ID some
sort of replica update sequence number?); if it were fully documented, it
would be easier to recover from these situations.


(2) Minor point: the FUSE client no longer seems to understand or need the
"_netdev" option; however, it still gets passed in if you use "defaults" in
/etc/fstab, so you get a warning about an unknown option:

    root at dev-storage1:~# grep gluster /etc/fstab
    storage1:/safe /gluster/safe glusterfs defaults,nobootwait 0 0
    storage1:/fast /gluster/fast glusterfs defaults,nobootwait 0 0

    root at dev-storage1:~# mount /gluster/safe
    unknown option _netdev (ignored)
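
If it really is "defaults" that drags _netdev in, then presumably spelling
out the options instead would silence the warning; I haven't tried it, so
treat these lines as a guess:

    # hypothetical fstab entries without "defaults"
    storage1:/safe /gluster/safe glusterfs nobootwait 0 0
    storage1:/fast /gluster/fast glusterfs nobootwait 0 0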

Regards,

Brian.
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

