RE: Decommission a brick

Thanks, the concept makes sense. I would be using the unify scenario you gave, but it effectively requires downtime while the brick is rsynced back into /mnt/glusterfs. It takes quite a bit of time to get large quantities of data transferred back into the active file system.

I would like to see a feature where I can keep the brick active, yet all *new* file accesses or creations would avoid the brick that is planned for decommission. That way, regularly used files would migrate to other bricks through normal use, and over time there would be less data to transfer from the decommissioning brick. Isn't there some way to do this in the spec file so that no more files are written to the brick?

I was thinking of something more like the following (a spec-file sketch of step 1 appears after the list):
1) set the available disk size to zero for the brick
2) run find /mnt/glusterfs (this would move the files out of the full or decommissioned brick)
3) clean up the spec file some more, and remount the glusterfs
This scenario leaves the files unavailable to the users for much less time. I do not know if all of these features are available in the current product.
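Something close to step 1 may already be expressible through the unify scheduler limits in 1.3. A rough sketch, assuming the ALU scheduler and that the alu.limits.min-free-disk option behaves this way in your release (the volume and brick names are made up):

    volume unify0
      type cluster/unify
      option namespace brick-ns
      option scheduler alu
      # assumption: a brick below the free-space floor stops receiving
      # new files, so filling (or shrinking) the brick marked for
      # decommission would steer all new creations elsewhere
      option alu.limits.min-free-disk 10%
      subvolumes brick1 brick2 brick3
    end-volume

Existing files on the old brick would still be readable through unify; only the placement of new files would change.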


Date: Fri, 14 Dec 2007 15:40:12 +0530
From: avati@xxxxxxxxxxxxx
To: deedee6905@xxxxxxxxxxx
Subject: Re: Decommission a brick
CC: gluster-devel@xxxxxxxxxx

DeeDee,
 decommissioning a brick is pretty much manual as of today. there are three cases in which you could be decommissioning - AFR, unify, stripe. I have explained below how I would decommission each of them. it is essential to understand the concept rather than follow the steps as a 'tutorial', and find your own (possibly much easier) procedure.


though the steps sound complex, if you look beyond the procedure, the idea behind them is easy to understand.

the hot add/remove feature in the upcoming (1.4) release will do all this for you automagically; this manual procedure is for 1.3.

 If it is a part of an AFR subvolume, just unplug it, trim your spec file, and you are all set.
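for instance, trimming the spec file for the AFR case is just an edit to the subvolumes line (volume and brick names here are illustrative):

    # before: replicate across three bricks
    volume afr0
      type cluster/afr
      subvolumes brick1 brick2 brick3
    end-volume

    # after: brick3 unplugged and dropped from the spec; the remaining
    # replicas already hold full copies, so nothing needs to be copied
    volume afr0
      type cluster/afr
      subvolumes brick1 brick2
    end-volume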

If it is a part of a unify subvolume (commands sketched after this list),
 - remove the subvolume from the spec file,
 - flush your namespace cache,
 - remount your fs,
 - run a find /mnt/glusterfs for it to rebuild the namespace cache (this time without the contents of the brick being removed, since it is no longer in the subvolume list),
 - rsync the export directory of the decommissioning brick onto the mount point to merge its contents back.
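put as commands, that might look roughly like this; the volfile path, export directories, and mount point are assumptions, so adjust them to your layout:

    umount /mnt/glusterfs
    # (edit the spec file here to drop the brick from the subvolume list)

    # flush the namespace cache, e.g. by clearing the namespace
    # brick's backing directory (this path is an assumption)
    rm -rf /srv/export-ns/*

    # remount, then walk the tree so unify rebuilds the namespace
    # cache, now without the removed brick's entries
    glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs
    find /mnt/glusterfs >/dev/null

    # merge the decommissioned brick's export directory back in
    rsync -av /srv/export-old/ /mnt/glusterfs/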


If it is part of a stripe:
 the simplest is to move out just the striped files, re-do the config, and write the striped files back in freshly (sketched just below).
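a sketch of that route; the staging path is invented and needs enough room to hold everything on the stripe:

    # 1) copy the striped files out through the mount (they come out whole)
    mkdir /tmp/stripe-staging
    cp -a /mnt/glusterfs/. /tmp/stripe-staging/
    rm -rf /mnt/glusterfs/*

    # 2) re-do the config: drop the brick from the stripe definition, remount

    # 3) write the files back so they are striped across the remaining bricks
    cp -a /tmp/stripe-staging/. /mnt/glusterfs/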
 another way is (spec sketch after this list):
 - declare a new stripe set, with a new export directory on the server, excluding the decommissioning node.
 - load a unify on top of the old and new stripes, with the 'switch' scheduler, and configure a directory to be forced onto the new stripe set.
 - copy the old stripe files into the new directory from within glusterfs (mv/rename will not do); now all the striped files are re-striped onto the new volume.
 - delete the old stripe files, essentially emptying the old stripe export directories.
 - preserve only the first subvolume of the old stripe, and make that a direct subvolume of the just-created unify.
 - do away with the old stripe volume.
 - done.
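in spec-file terms, the unify-over-two-stripes step might look roughly like this. the volume names are invented, and the switch scheduler's case syntax is an assumption from 1.3-era docs, so verify it against your release:

    # old stripe set, still holding the data to be migrated
    volume stripe-old
      type cluster/stripe
      subvolumes old1 old2 old3
    end-volume

    # new stripe set, excluding the decommissioning node
    volume stripe-new
      type cluster/stripe
      subvolumes new1 new2
    end-volume

    volume unify0
      type cluster/unify
      option namespace brick-ns
      option scheduler switch
      # assumed syntax: force creations under the restripe
      # directory onto the new stripe set
      option switch.case *restripe* stripe-new
      subvolumes stripe-old stripe-new
    end-volume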

so... that's the idea. do ask if anything is not clear.

avati

2007/12/8, DeeDee Park <deedee6905@xxxxxxxxxxx>:
What is the procedure for removing a brick from operation without losing any of the data, while still providing an operational file system for the other users?



Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel



-- 
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.

