Re: Replace dead nodes/bricks

A few things are not clear to me.  Comments inline below.

On 5/15/2014 5:19 AM, Lyle Plaatjes wrote:
> I've just started at a new company and I came across this problem. They have web servers peering using gluster 3.2.7.

I take it that the gluster storage is on the same machines as the web-server software?

Was this a replica-4 setup, where there is a replicated brick on each of 4 nodes?

> The problem comes in where they upgraded the VMs to newer versions of Ubuntu. They didn't replace the bricks before decommissioning the other nodes. Only one node is actually still running, so that is the one brick that actually exists and is being replicated to the new hosts.

So there really is no true replication going on, just all files being served from the one gluster brick that still works?  (And if that brick dies, the whole site disappears until restored from backup {if there is a backup})

> Now when one of the hosts is rebooted, gluster doesn't mount the volume because it's looking at the 3 dead peers and the one that is still fine.

Are your new nodes (without bricks) peers in the gluster cluster?
Are your mounts of the form localhost:<volume-name> ?
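
If it helps, you can check the current state first without changing anything.  These are read-only commands; "webvol" is just a placeholder for whatever your real volume name is:

  # list the peers this node knows about, and whether they are connected
  gluster peer status

  # show the volume type, replica count, and the bricks it still references
  gluster volume info webvol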

> What I need to do is replace the old dead peers with the new ones so that the gluster volume will actually mount if a host is rebooted.
Based on my guesses as to what your setup is, here is what I would do:
  • Get all web servers operating as peers in the trusted pool
    • It is not clear whether the new web servers even have gluster installed
  • change /etc/fstab so that mounts are of the form localhost:<volume-name> (see the fstab sketch after this list)
    • so that it doesn't matter what other node is up or down, as long as the volume is active
    • I don't know what exact commands Ubuntu uses, but in CentOS 6 I use the "nofail" option in the fourth column of /etc/fstab (where 'defaults' is the usual entry).
      • This allows the bootup to proceed, even though the volume may not be mountable yet.
        • During (CentOS) bootup, mount gets a second (or third) chance to mount things
    • make sure that the last two columns in /etc/fstab are "0 0"
      • so that it doesn't try to do a filesystem check during bootup
  • set up bricks on new servers
    • if the machine names are the same as the old machines, use a different path from the old brick path
      • to see old brick path, run "gluster volume info <volume-name>"
    • put brick in /etc/fstab, so it gets mounted
  • run "gluster volume add-brick <volume-name> replica <current replica number + 1> <hostname>:/<brick-path>" on each node
    • this adds the new bricks to the volume
    • you may need to wait until one brick has healed (synchronized) before adding the next brick
      • even synching one brick can saturate a network link, and bring things to their knees
    • repeat until you have 4 bricks active
  • run "gluster volume remove-brick <volume-name> replica <old replica number -1> <hostname>:/<brick-path>" to remove "historic" bricks
    • don't remove the old bricks before you have added the new ones
      • taking a replicated volume down to one brick and then trying to bring it back to two bricks can be problematic on some (maybe all) versions of gluster
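
As a rough sketch of the fstab entries I mean above (the mount point /var/www, the volume name webvol, the brick device /dev/vdb1, and the brick path /export/brick2 are all made-up placeholders; I believe Ubuntu's mount honors "nofail" the same way, but check):

  # underlying filesystem for the new brick (hypothetical device and path)
  /dev/vdb1         /export/brick2  xfs        defaults,nofail          0 0

  # the gluster volume itself, mounted through the local glusterd
  localhost:webvol  /var/www        glusterfs  defaults,_netdev,nofail  0 0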

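And a sketch of the add/remove sequence itself, using the same placeholder names and the "+1"/"-1" replica logic above.  I am assuming the surviving node is web1 (keeping its existing brick), the replacement servers are web2-web4 with new bricks at /export/brick2, the dead peers are dead2-dead4, and "gluster volume info" currently reports the volume as replica 4.  I believe releases newer than 3.2.x handle the replica-count change on add-brick/remove-brick more gracefully, so check your version first:

  # run from the surviving node: bring the replacement servers into the pool
  gluster peer probe web2
  gluster peer probe web3
  gluster peer probe web4

  # add the new bricks one at a time, bumping the replica count each time;
  # let each brick finish syncing before adding the next
  gluster volume add-brick webvol replica 5 web2:/export/brick2
  gluster volume add-brick webvol replica 6 web3:/export/brick2
  gluster volume add-brick webvol replica 7 web4:/export/brick2

  # only after the new bricks are in and healed, drop the dead ones
  gluster volume remove-brick webvol replica 6 dead2:/old/brick/path
  gluster volume remove-brick webvol replica 5 dead3:/old/brick/path
  gluster volume remove-brick webvol replica 4 dead4:/old/brick/path
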
Remember, I am assuming:

  • you have 4 web servers that should also all be gluster brick nodes
  • you were running a replica 4 configuration

Ted Miller
Elkhart, IN, USA

I am not an official part of gluster, just another user who has added and removed bricks a few times.

