Gluster 11 and NVMe

Hi there.

I'd like to know if there are any known issues with GlusterFS and NVMe.
This week I had two customers for whom I built 2-node Proxmox VE clusters with GlusterFS 11.
I created everything as follows. On both nodes I did:

mkdir /data1
mkdir /data2
mkfs.xfs /dev/nvme1
mkfs.xfs /dev/nvme2
mount /dev/nvme1 /data1
mount /dev/nvme2 /data2
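
(For anyone reproducing this: on a typical system the NVMe block devices are the namespace nodes such as /dev/nvme0n1 and /dev/nvme1n1, and the brick mounts would usually also go into /etc/fstab so they survive a reboot, something like the sketch below.)

# illustrative fstab entries -- device names are examples, not the exact ones used here
/dev/nvme0n1  /data1  xfs  defaults,noatime  0 0
/dev/nvme1n1  /data2  xfs  defaults,noatime  0 0
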
After installing GlusterFS and doing the peer probe, I ran:
gluster vol create VMS replica 2 gluster1:/data1/vms gluster2:/data1/vms gluster1:/data2/vms gluster2:/data2/vms
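
For reference, with replica 2 Gluster pairs the bricks two at a time in the order they are given, so the first two bricks above form one mirrored pair and the last two form the other. The resulting layout can be double-checked with:

gluster volume info VMS      # brick list, replica count, volume options
gluster volume status VMS    # which bricks and processes are online
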

To avoid split-brain issues, I applied these settings:
gluster vol set VMS cluster.heal-timeout 5
gluster vol heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster vol set VMS cluster.favorite-child-policy mtime
gluster vol heal VMS granular-entry-heal enable
gluster vol set VMS cluster.data-self-heal-algorithm full
gluster vol set VMS features.shard on
gluster vol set VMS performance.write-behind off
gluster vol set VMS performance.flush-behind off
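
In case anyone wants to compare, the effective options can be read back on either node; the grep pattern below is just an example of mine, not anything official:

gluster volume get VMS all | grep -E 'shard|quorum|ping-timeout|favorite-child|write-behind|flush-behind'
gluster volume heal VMS info summary    # quick heal-state overview per brick
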

This configuration lets me power down the first server and have the VMs restart on the second server with no issues at all.
I have the very same scenario at another customer, but there we are working with Kingston DC600M SSDs.

It turns out that on the servers with NVMe drives I get a lot of disk corruption inside the VMs.
If I reboot, things get worse.
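
For whoever wants to dig in, the usual places I know to look are the heal state and the brick logs, something along these lines (standard commands and log locations, nothing customer-specific):

gluster volume heal VMS info                    # files with pending heals
gluster volume heal VMS info split-brain        # files actually in split-brain
dmesg | grep -i nvme                            # kernel-level NVMe errors on the hosts
less /var/log/glusterfs/bricks/data1-vms.log    # brick log; the file name follows the brick path
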

Does anybody know of similar cases with Gluster and NVMe?
Is there any fix for this?

Thanks


---


Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram




