The arbiter can help in the second scenario from https://docs.gluster.org/en/main/Administrator-Guide/Split-brain-and-ways-to-deal-with-it/ .
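If you do hit that scenario, a minimal sketch of handling it from the CLI (assuming the volume name VMS used below; the file path is hypothetical and is given as seen from the root of the volume):

# list files currently in split-brain
gluster volume heal VMS info split-brain

# keep the copy with the newest mtime, consistent with the
# cluster.favorite-child-policy mtime setting below
gluster volume heal VMS split-brain latest-mtime /path/to/affected-file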
Best Regards,
Strahil Nikolov
On Monday, October 21, 2024 at 14:40:24 GMT+3, Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx> wrote:
Ok! I got it about how many disks I can lose and so on.
But regarding the arbiter issue: I always set these parameters on the gluster volume in order to avoid split-brain, and I might add that they have worked pretty well for me.
I already have a Proxmox VE cluster with 2 nodes and about 50 VMs running different Linux distros - and Windows as well - with cPanel and other stuff, in production.
Anyway, here are the parameters I have used:
gluster vol set VMS cluster.heal-timeout 5
gluster vol heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster vol set VMS cluster.favorite-child-policy mtime
gluster vol heal VMS granular-entry-heal enable
gluster vol set VMS cluster.data-self-heal-algorithm full
gluster vol set VMS features.shard on
gluster vol set VMS performance.write-behind off
gluster vol set VMS performance.flush-behind off
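For what it's worth, you can verify that the options took effect with gluster volume get (a quick sketch):

gluster volume get VMS cluster.favorite-child-policy
gluster volume get VMS cluster.quorum-count
# or dump everything and filter:
gluster volume get VMS all | grep -E 'quorum|favorite-child|shard'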
---
Gilberto Nunes Ferreira
On Sunday, October 20, 2024 at 17:34, Strahil Nikolov <hunter86_bg@xxxxxxxxx> wrote:
If it's replica 2, you can lose up to 1 brick per replica set (distribution group). For example, if you have a volume TEST with a setup like this:
server1:/brick1
server2:/brick1
server1:/brick2
server2:/brick2
You can lose any one brick of the "/brick1" replica set and any one brick of the "/brick2" replica set. So if you lose server1:/brick1 and server2:/brick2 -> no data loss will be experienced.
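For illustration, a sketch of the create command that yields exactly that layout - bricks are grouped into replica pairs in the order given, so server1:/brick1 mirrors server2:/brick1 and server1:/brick2 mirrors server2:/brick2 (with replica 2 the CLI warns about the split-brain risk and asks for confirmation):

gluster volume create TEST replica 2 \
    server1:/brick1 server2:/brick1 \
    server1:/brick2 server2:/brick2
gluster volume start TEST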
As usual, consider if you can add an arbiter for your volumes.
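A minimal sketch of such a conversion (the host server3 and its brick paths are hypothetical; one arbiter brick is needed per replica pair, and arbiter bricks store only metadata, so they can live on much smaller disks):

gluster volume add-brick TEST replica 3 arbiter 1 \
    server3:/arbiter/brick1 server3:/arbiter/brick2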
Best Regards,
Strahil Nikolov

On Saturday, October 19, 2024 at 18:32:40 GMT+3, Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx> wrote:

Hi there.
I have 2 servers with this number of disks on each side:

pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2.0T 19G 2.0T 1% /disco2TB-0
/dev/sdj 1.0T 9.2G 1015G 1% /disco1TB-4

I have a gluster volume of Type: Distributed-Replicate.
So my question is: how many disks can fail before I lose data?
Thanks in advance
---
Gilberto Nunes Ferreira
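For reference, one way to check this on the live volume (a sketch): gluster volume info lists the bricks in order, and the "Number of Bricks: N x 2 = M" line says there are N replica sets of 2 bricks each; each consecutive pair in the brick list mirrors each other, and you can lose at most one brick per pair.

gluster volume info | grep -E 'Type|Number of Bricks|Brick'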
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users