After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: arbiter:/arbiter1 (arbiter)
Brick4: gluster1:/disco1TB-0/vms
Brick5: gluster2:/disco1TB-0/vms
Brick6: arbiter:/arbiter2 (arbiter)
Brick7: gluster1:/disco1TB-1/vms
Brick8: gluster2:/disco1TB-1/vms
Brick9: arbiter:/arbiter3 (arbiter)
Options Reconfigured:
cluster.self-heal-daemon: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.data-self-heal: off
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.client-io-threads: off
pve01:~#
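(A minimal follow-up sketch, not from the thread: with the arbiter bricks added, one would normally let them populate. Note this volume currently has all self-heal options disabled, so they would have to be switched back on first; whether to do that here is an assumption.)
pve01:~# gluster volume set VMS cluster.self-heal-daemon on
pve01:~# gluster volume heal VMS full
pve01:~# gluster volume heal VMS info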
---
Gilberto Nunes Ferreira
On Fri, Nov 8, 2024 at 06:38, Strahil Nikolov <hunter86_bg@xxxxxxxxx> wrote:
What's the volume structure right now?
Best Regards,
Strahil Nikolov
On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx> wrote:
So I went ahead and did the force (the force is with you!):
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior.
pve01:~# gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
But I don't know if this is the right thing to do.
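(A sanity-check sketch, not shown in the thread: confirm all nine bricks report Online and each replica set lists its arbiter.)
pve01:~# gluster volume status VMS
pve01:~# gluster volume info VMS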
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Wed, Nov 6, 2024 at 13:10, Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx> wrote:
But if I change replica 2 arbiter 1 to replica 3 arbiter 1:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
I got this error:
volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior.
Should I maybe add the force and live with this?
---
Gilberto Nunes Ferreira
On Wed, Nov 6, 2024 at 12:53, Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx> wrote:
Ok. I have a 3rd host with Debian 12 installed and Gluster v11. The name of the host is arbiter!
I already added this host to the pool:
arbiter:~# gluster pool list
UUID Hostname State
0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected
99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected
4718ead7-aebd-4b8b-a401-f9e8b0acfeb1 localhost Connected
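(For reference, a sketch of the step that gets a host into the trusted pool, typically run from one of the existing nodes; the probe itself isn't shown in the thread:)
pve01:~# gluster peer probe arbiter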
But when I do this:
pve01:~# gluster volume add-brick VMS replica 2 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
I got this error:
For arbiter configuration, replica count must be 3 and arbiter count must be 1. The 3rd brick of the replica will be the arbiter
Usage:
volume add-brick <VOLNAME> [<replica> <COUNT> [arbiter <COUNT>]] <NEW-BRICK> ... [force]
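(Per the usage message, an arbiter add-brick must specify replica count 3; the corrected invocation, which is what eventually gets forced further up the thread, would be:)
pve01:~# gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3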
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
Brick6: gluster2:/disco1TB-1/vms
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
cluster.data-self-heal: off
cluster.metadata-self-heal: off
cluster.entry-self-heal: off
cluster.self-heal-daemon: off
What am I doing wrong?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Wed, Nov 6, 2024 at 11:32, Strahil Nikolov <hunter86_bg@xxxxxxxxx> wrote:
Right now you have 3 "sets" of replica 2 on 2 hosts.
In your case you don't need much space for the arbiters (10-15GB with 95 maxpct is enough for each "set"; see the sketch after this message), but you do need a 3rd system: otherwise, when the node that holds both the data brick and the arbiter brick fails (the 2-node scenario), that "set" will be unavailable.
If you do have a 3rd host, I think the command would be:
gluster volume add-brick VOLUME replica 2 arbiter 1 server3:/first/set/arbiter server3:/second/set/arbiter server3:/last/set/arbiter
Best Regards,
Strahil Nikolov
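(A sketch of preparing one such small arbiter brick, following the 10-15GB / 95 maxpct sizing above. The device path /dev/sdb1 and the choice of XFS are assumptions; maxpct=95 lets inodes consume most of the filesystem, which suits arbiter bricks since they store metadata only.)
mkfs.xfs -i maxpct=95 /dev/sdb1
mkdir -p /arbiter1
mount /dev/sdb1 /arbiter1
(repeat with separate devices or logical volumes for /arbiter2 and /arbiter3)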
On Tue, Nov 5, 2024 at 21:17, Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx> wrote:
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users