On 27/10/20 13:15, Gilberto Nunes wrote:
> I have applied these parameters to the 2-node gluster:
> gluster vol set VMS cluster.heal-timeout 10
> gluster volume heal VMS enable
> gluster vol set VMS cluster.quorum-reads false
> gluster vol set VMS cluster.quorum-count 1

Urgh! IIUC you're begging for split-brain... I think you should leave quorum-count=2 for safe writes. If a node is down, the volume obviously becomes read-only. But if you planned the downtime, you can reduce quorum-count just before shutting the node down. You'll then have to bring it back to 2 before re-enabling the downed server, and wait for the heal to complete before you can take down the second server.

> Then I mount the gluster volume by putting this line in the fstab file:
>
> In gluster01:
> gluster01:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
>
> In gluster02:
> gluster02:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster01 0 0

Isn't it preferable to use the 'hostlist' syntax?

gluster01,gluster02:VMS /vms glusterfs defaults,_netdev 0 0

A / at the beginning is optional, but can be useful if you're trying to use the diamond freespace collector (without the initial slash, it ignores glusterfs mountpoints).

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786

________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
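
[Editorial addendum] The planned-downtime procedure described in the reply (lower quorum-count, do maintenance, restore it, wait for heal) can be sketched as the following admin-command sequence. This is a non-authoritative sketch assuming a replica-2 volume named VMS as in the thread; run it on a node that is staying up, and adapt names to your setup.

```shell
# 1. Just before the planned shutdown of one brick server, relax the
#    client-side write quorum so the surviving node keeps accepting writes:
gluster volume set VMS cluster.quorum-count 1

# ... shut the node down, perform maintenance, boot it again ...

# 2. Restore the safe quorum of 2 before the returning server carries writes:
gluster volume set VMS cluster.quorum-count 2

# 3. Check self-heal status and wait until no entries remain before taking
#    down the second server (rerun until the counters reach 0):
gluster volume heal VMS info
```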