Re: Proper procedure to reduce an active volume

On Fri, 5 Feb 2021, 12:27 Diego Zuccato, <diego.zuccato@xxxxxxxx> wrote:
On 04/02/21 19:28, Nag Pavan Chilakam wrote:

>     What is the proper procedure to reduce a "replica 3 arbiter 1" volume?
> Can you kindly elaborate on the volume configuration? Is this a plain
> arbiter volume or is it a distributed arbiter volume?
> Please share the volume info so that we can help you better.
Sure. Here it is. Shortened a bit :)
-8<--
# gluster v info

Volume Name: BigVol
Type: Distributed-Replicate
Volume ID: c51926bd-6715-46b2-8bb3-8c915ec47e28
Status: Started
Snapshot Count: 0
Number of Bricks: 28 x (2 + 1) = 84
Transport-type: tcp
Bricks:
Brick1: str957-biostor2:/srv/bricks/00/BigVol
Brick2: str957-biostor:/srv/bricks/00/BigVol
Brick3: str957-biostq:/srv/arbiters/00/BigVol (arbiter)
Brick4: str957-biostor2:/srv/bricks/01/BigVol
Brick5: str957-biostor:/srv/bricks/01/BigVol
Brick6: str957-biostq:/srv/arbiters/01/BigVol (arbiter)
[...]
Brick79: str957-biostor:/srv/bricks/26/BigVol
Brick80: str957-biostor2:/srv/bricks/26/BigVol
Brick81: str957-biostq:/srv/arbiters/26/BigVol (arbiter)
Brick82: str957-biostor:/srv/bricks/27/BigVol
Brick83: str957-biostor2:/srv/bricks/27/BigVol
Brick84: str957-biostq:/srv/arbiters/27/BigVol (arbiter)
Options Reconfigured:
features.scrub-throttle: aggressive
server.manage-gids: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.self-heal-daemon: enable
ssl.certificate-depth: 1
auth.ssl-allow: str957-bio*
features.scrub-freq: biweekly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
client.ssl: on
server.ssl: on
server.event-threads: 8
client.event-threads: 8
cluster.granular-entry-heal: enable
-8<--
OK, in this case it is a 28 x (2 + 1) volume, so you will have to remove each replica set (one distribute subvolume) in one go, e.g.:
gluster volume remove-brick VOLNAME BRICK1 BRICK2 BRICK3 start
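
For example, to remove the last replica set shown in the volume info above, the command would look like this (just a sketch reusing the Brick82-84 paths from your gluster v info output; adjust to whichever set you actually want to drop):

# gluster volume remove-brick BigVol \
    str957-biostor:/srv/bricks/27/BigVol \
    str957-biostor2:/srv/bricks/27/BigVol \
    str957-biostq:/srv/arbiters/27/BigVol \
    start

This starts migrating the data off those two data bricks (the arbiter holds no file data) onto the remaining 27 replica sets.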

>     The procedure I've found is:
>     1) # gluster volume remove-brick VOLNAME BRICK start
>     (repeat for each brick to be removed, but since it is a r3a1, should I remove
>     both bricks and the arbiter in a single command or in multiple ones?)
Yes, in one single command; you need to mention the whole replica set. In your case b1,b2,b3 then b4,b5,b6 and so on.
You can either mention multiple replica sets together or one set at a time.
> No, you can mention the bricks of a distributed subvolume in one command.
> If you have a 1x(2+1a) volume, then you should mention only one
> brick; start by removing the arbiter brick.
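For that plain 1x(2+1a) case, since the arbiter brick holds no file data, the usual pattern (as far as I remember, so please double-check against the docs; the host/path below are placeholders, not taken from this thread) is to drop the replica count and the arbiter brick in one command:

# gluster volume remove-brick VOLNAME replica 2 HOST:/path/to/arbiter-brick force

No data migration is needed there, which is why force is used instead of start/commit.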
Ok.

>     2) # gluster volume remove-brick VOLNAME BRICK status
>     (to monitor migration)
>     3) # gluster volume remove-brick VOLNAME BRICK commit
>     (to finalize the removal)
>     4) umount and reformat the freed (now unused) bricks
>     Is this safe?
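One note on steps 2 and 3: the status and commit sub-commands take the same brick list you passed to start, so for this volume it would look roughly like this (a sketch reusing the paths from the example above):

# gluster volume remove-brick BigVol \
    str957-biostor:/srv/bricks/27/BigVol \
    str957-biostor2:/srv/bricks/27/BigVol \
    str957-biostq:/srv/arbiters/27/BigVol \
    status

and, once the migration is shown as completed, the same command with commit in place of status. Only after commit should the freed bricks be umounted and reformatted.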
> What is the actual need to remove bricks?
I need to move a couple of disks to a new server, to keep it all well
balanced and increase the available space.

> If you feel this volume is not needed anymore, then just delete the
> volume instead of going through each brick deletion.
No no, the volume is needed and is currently hosting data I cannot
lose... But I don't have space to copy it elsewhere...

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786
