Re: Replica bricks fungible?

> it will require quite a lot of time to *rebalance*...

(my emphasis on "rebalance"). Just to avoid any misunderstanding:
I am talking about a pure replica volume, not a distributed
replica and not an arbitrated replica. I would guess that moving
bricks also works on a distributed replica, as long as each move
stays within its own replica set rather than crossing into
another, but that's only a guess.
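
If in doubt about which case applies, the volume type can be
checked first:

# gluster volume info gv0 | grep -E "^(Type|Number of Bricks)"

A pure replica reports "Type: Replicate"; a distributed replica
reports "Type: Distributed-Replicate".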

> Have you documented the procedure you followed?

I did several different things. I moved a brick from one path
to another on the same server, and I also moved a brick from
one server to another. The procedure in both cases is the same.

# gluster volume heal gv0 statistics heal-count
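
To see only the per-brick totals, you can filter the output;
this assumes your Gluster version prints a "Number of entries"
line per brick, as recent versions do:

# gluster volume heal gv0 statistics heal-count | grep "Number of entries"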

If all the "Number of entries" counts are 0, proceed:

# for n in node01 node02 node03; do ssh root@$n "systemctl stop glusterd"; done

(This prevents any client from writing to any node while the
copy/move operations are ongoing. It is not necessary if you
have unmounted all the clients.)
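
If you go the unmount route, one way to verify that no clients
are still connected (run this before stopping glusterd) is:

# gluster volume status gv0 clients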

# ssh root@node04
# rsync -vvaz --progress node01:/gfsroot/gv0 /gfsroot/

node04 in the above example is the new node. The destination
could also be a new brick on an existing node, for example:

# mount /dev/sdnewdisk1 /gfsnewroot
# rsync -vva --progress /gfsroot/gv0 /gfsnewroot/
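
In either case, one caveat as a precaution: rsync's -a does not
preserve extended attributes or hard links, and a Gluster brick
relies on both (xattrs for the gfid metadata, hard links under
.glusterfs). To be safe, I would add -H, -A and -X:

# rsync -vvaHAX --progress node01:/gfsroot/gv0 /gfsroot/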

Once you have a full copy of the old brick in the new location,
you can simply:

# for n in node01 node02 node03 node04; do ssh root@$n "systemctl start glusterd"; done
# gluster volume add-brick gv0 replica 4 node04:/gfsroot/gv0
# gluster volume status
# gluster volume remove-brick gv0 replica 3 node01:/gfsroot/gv0

In this example I run add-brick before remove-brick, to avoid
the theoretical risk of split-brain on a 3-brick volume that is
momentarily left with only two bricks. In real life you will
either have many more than three bricks, or you will have kicked
out all the clients before starting this procedure, so the order
of add and remove won't matter.
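
Two small caveats: depending on the Gluster version, a
remove-brick that lowers the replica count may have to be
confirmed with the "force" keyword, and it does not hurt to let
self-heal finish between the add and the remove:

# gluster volume heal gv0 info
# gluster volume remove-brick gv0 replica 3 node01:/gfsroot/gv0 force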



