remove-brick: sanity check

Hello Gluster peeps

Just sanity checking the procedure for removing bricks ...

We're on v3.2.7, with four nodes (g1, g2, g3, g4) and three bricks on
each node. The first two bricks on each node belong to a replicated
volume (gv1); the third brick on each node belongs to a distributed
volume (gv2).

The plan is to bring usage on both volumes below 50%, remove the
bricks on nodes three and four, and, once the rebalance is complete,
remove nodes three and four from the cluster.
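For reference, here's a rough sketch of the commands I'm expecting to
run, based on my reading of the docs. The brick paths are guesses at
our layout, and I'm not certain the start/status/commit workflow (which
migrates data off the bricks before removal) exists on v3.2.7, so treat
this as an outline rather than a tested procedure:

```shell
# Remove the gv2 (distributed) bricks on g3 and g4. In releases that
# support it, "start" migrates data onto the remaining bricks first.
gluster volume remove-brick gv2 g3:/export/brick3 g4:/export/brick3 start

# Poll until the migration reports completed, then commit the removal.
gluster volume remove-brick gv2 g3:/export/brick3 g4:/export/brick3 status
gluster volume remove-brick gv2 g3:/export/brick3 g4:/export/brick3 commit

# For the replicated volume gv1, remove the bricks on g3 and g4
# (again, paths are assumptions about our layout).
gluster volume remove-brick gv1 g3:/export/brick1 g3:/export/brick2 \
    g4:/export/brick1 g4:/export/brick2 start

# Once neither volume references g3 or g4, drop them from the pool.
gluster peer detach g3
gluster peer detach g4
```

If the migration step isn't available on our release, I assume we'd
have to copy the data off the distributed bricks manually before
committing the removal.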


I've been reading the docs, and all seems to make sense. But I have
some questions:

1. For the replicated volume (gv1), removing bricks _should_ be safe,
correct?
2. For the distributed volume (gv2), how do I make sure the data is
migrated onto the bricks that will remain?

Any pointers appreciated.

Thanks.

-- 
Pete Smith
DevOp/System Administrator
Realise Studio
12/13 Poland Street, London W1F 8QB
T. +44 (0)20 7165 9644

realisestudio.com

