These are just a small sample of the virt group settings that you are not using:
performance.strict-o-direct=on
cluster.lookup-optimize=off
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
network.remote-dio=disable
Here is an example file: https://github.com/gluster/glusterfs/blob/devel/extras/group-virt.example
Also, glusterfs-server provides a lot more groups of settings in the /var/lib/glusterd/groups directory.
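If you want to try them, a minimal sketch (assuming your volume is still named VMS, as in your mail) would be to apply the whole group in one go rather than option by option:

gluster volume set VMS group virt   # loads every option listed in /var/lib/glusterd/groups/virt

Individual options can still be overridden afterwards with gluster volume set VMS <option> <value>.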
Best Regards,
Strahil Nikolov
On Saturday, 22 February 2025 at 14:56:14 GMT+2, Gilberto Ferreira <gilberto.nunes32@xxxxxxxxx> wrote:
Hi there.
I'd like to know if there are any issues with GlusterFS and NVME.
This week I had two customers where I built 2 Proxmox VE nodes with GlusterFS 11.
I created it as follows.
On both nodes I do:
mkdir /data1
mkdir /data2
mkfs.xfs /dev/nvme1
mkfs.xfs /dev/nvme2
mount /dev/nvme1 /data1
mount /dev/nvme2 /data2
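(For reference, a minimal /etc/fstab sketch to make these brick mounts persistent; the device paths are taken from the commands above and should be adjusted to the real NVMe device names or, better, UUIDs:)

# /etc/fstab entries for the brick filesystems (device paths are assumptions)
/dev/nvme1  /data1  xfs  defaults,noatime  0 0
/dev/nvme2  /data2  xfs  defaults,noatime  0 0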
After installing GlusterFS and doing the peer probe, I run:
gluster vol create VMS replica 2 gluster1:/data1/vms gluster2:/data1/vms gluster1:/data2/vms gluster2:/data/vms
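(A quick sanity check after creating the volume, sketched with the same volume name, would be:)

gluster vol start VMS
gluster vol info VMS      # shows the replica layout and brick list
gluster vol status VMS    # shows whether all brick processes are online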
To solve the split-brain issue, I applied these configurations:
gluster vol set VMS cluster.heal-timeout 5
gluster vol heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster vol set VMS cluster.favorite-child-policy mtime
gluster vol heal VMS granular-entry-heal enable
gluster vol set VMS cluster.data-self-heal-algorithm full
gluster vol set VMS features.shard on
gluster vol set VMS performance.write-behind off
gluster vol set VMS performance.flush-behind off
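(The applied options can be double-checked with, for example:)

gluster vol get VMS cluster.quorum-count
gluster vol get VMS all | grep -E 'quorum|heal|shard'   # dump all options and filter the relevant ones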
This configuration allows me to power down the first server and have the VMs restart on the secondary server with no issues at all.
I have the very same scenario at another customer, but there we are working with Kingston DC600M SSDs.
It turns out that on the servers with NVMe I get a lot of disk corruption inside the VMs.
If I reboot, things get worse.
Does anybody know of any cases of Gluster and NVMe issues like that?
Is there any fix for that?
Thanks
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users