Re: Geo-replication status Faulty

It could be a "simple" bug - software has bugs and regressions.

I would recommend pinging the Debian mailing list - at least it won't hurt.

Best Regards,
Strahil Nikolov






On Tuesday, October 27, 2020 at 20:10:39 GMT+2, Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote: 





[SOLVED]

Well... It seems that pure Debian Linux 10 has some problem with XFS, which is the FS I used.
It does not accept the attr2 mount option.

Interestingly enough, now that I am using Proxmox 6.x, which is Debian-based, I am able to use the attr2 mount option.
With that, the Faulty status of geo-rep is gone.
Perhaps the Proxmox staff compiled XFS from scratch... I don't know...
But now I am happy, because my main reason for using geo-rep is to run it on Proxmox...

cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / xfs defaults 0 1
/dev/pve/swap none swap sw 0 0
/dev/sdb1       /DATA   xfs     attr2   0       0
gluster01:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
proc /proc proc defaults 0 0
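
For anyone wanting to check whether attr2 actually took effect, something like this should work (just a quick sketch, not a definitive check; the path assumes the /DATA brick from the fstab above):

xfs_info /DATA | grep attr       # "attr=2" in the meta-data section means attr2 is in use
grep ' /DATA ' /proc/mounts      # shows the options the kernel actually mounted with
dmesg | tail                     # a rejected mount option would show up as an error here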


---
Gilberto Nunes Ferreira







On Tue, Oct 27, 2020 at 09:39, Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote:
>>> IIUC you're begging for split-brain ...
> Not at all!
> I have used this configuration and there has not been any split-brain at all!
> But if I do not use it, then I do get a split-brain.
> Regarding quorum-count 2, I will look into it!
> Thanks
> 
> ---
> Gilberto Nunes Ferreira
> 
> 
> 
> 
> 
> On Tue, Oct 27, 2020 at 09:37, Diego Zuccato <diego.zuccato@xxxxxxxx> wrote:
>> On 27/10/20 13:15, Gilberto Nunes wrote:
>>> I have applied these parameters to the 2-node gluster:
>>> gluster vol set VMS cluster.heal-timeout 10
>>> gluster volume heal VMS enable
>>> gluster vol set VMS cluster.quorum-reads false
>>> gluster vol set VMS cluster.quorum-count 1
>> Urgh!
>> IIUC you're begging for split-brain ...
>> I think you should leave quorum-count=2 for safe writes. If a node is
>> down, the volume obviously becomes read-only. But if the downtime is
>> planned, you can reduce quorum-count just before shutting the node down.
>> You'll have to bring it back to 2 before re-enabling the downed server,
>> then wait for heal to complete before being able to take down the second server.
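>> Something like this sequence (just a sketch, not tested here; volume
>> name VMS as in your commands above):
>>
>>   # before the planned shutdown, allow writes with only one brick up
>>   gluster vol set VMS cluster.quorum-count 1
>>   # ...do the maintenance on the downed node...
>>   # restore safe quorum before re-enabling the downed server
>>   gluster vol set VMS cluster.quorum-count 2
>>   # then check that heal has completed before downing the other server
>>   gluster vol heal VMS info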
>> 
>>> Then I mount the gluster volume by putting this line in the fstab file:
>>> In gluster01
>>> gluster01:VMS /vms glusterfs
>>> defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
>>> In gluster02
>>> gluster02:VMS /vms glusterfs
>>> defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster01 0 0
>> Isn't it preferable to use the 'hostlist' syntax?
>> gluster01,gluster02:VMS /vms glusterfs defaults,_netdev 0 0
>> A / at the beginning of the volume name is optional, but it can be useful
>> if you're trying to use the diamond freespace collector (without the
>> initial slash, it ignores glusterfs mountpoints).
>> 
>> -- 
>> Diego Zuccato
>> DIFA - Dip. di Fisica e Astronomia
>> Servizi Informatici
>> Alma Mater Studiorum - Università di Bologna
>> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>> tel.: +39 051 20 95786
>> 
> 
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


