On 2021-08-05 06:00 Strahil Nikolov wrote:
> I'm not so sure. Imagine that the local copy needs healing (outdated). Then Gluster will check whether the other nodes' copies are blaming the local one, and only if it is "GREEN" will it read locally. This check against the other servers is the slowest part, due to the latency between the nodes.
Sure, I was thinking about the "all replicas contain correct data" scenario. When a node is degraded/outdated, all bets are off.
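For reference, if I read the AFR documentation correctly, the read-side behaviour should be tunable with volume options along these lines (option names and values to be double-checked, <volname> is a placeholder):

   # prefer the replica local to the client, when its copy is clean
   gluster volume set <volname> cluster.choose-local on

   # or spread reads deterministically across replicas by hashing the file gfid
   gluster volume set <volname> cluster.read-hash-mode 1

As far as I understand, these only select which clean copy is read; they do not skip the check against the other replicas described above.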
> I guess the only way is to use the FUSE client mount options and manually change the source brick.
This should not be enough in case of a degraded/failed/outdated node: from my understanding, the FUSE client only contacts the specified server to retrieve the brick/server layout, then connects directly to the various bricks as needed for the actual data transfer.
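To illustrate (hypothetical volume/host names, untested): the extra servers on the mount line are only fallback sources for the volume layout, while data still flows to and from every brick:

   # gluster2/gluster3 are used only if gluster1 is unreachable at mount time
   mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 \
       gluster1:/myvol /mnt/gluster

Pinning reads to a specific replica should instead require passing an AFR option down to the client graph, something like the following (to be verified):

   # read from the first brick of the first replica set, if its copy is clean
   mount -t glusterfs \
       -o xlator-option=myvol-replicate-0.read-subvolume-index=0 \
       gluster1:/myvol /mnt/gluster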
> Another option that comes to my mind is pacemaker with an IPaddr2 resource and the option globally-unique=true. If done properly, pacemaker will bring the IP up on all nodes, but thanks to iptables rules (manipulated automatically by the cluster) only one node will be active at a time, with a preference for the fastest node. The FUSE client can then safely be configured to use that VIP, which in case of failure (of the fast node) will be moved to another node of the Gluster TSP. Yet, this will be a very complex design.
Yeah, quite complex and fragile... I would not want to debug such a scenario when the cluster manager fails to set up the correct rules ;)
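Just for the archives, my understanding is that the setup you describe would look roughly like this with pcs (untested sketch; the IP, node name and clone parameters are placeholders that would need tuning):

   # one VIP managed as a globally-unique clone (iptables CLUSTERIP under the hood)
   pcs resource create gluster-vip ocf:heartbeat:IPaddr2 \
       ip=192.0.2.10 cidr_netmask=24 clusterip_hash=sourceip \
       op monitor interval=10s
   # clone-node-max equal to clone-max lets all instances collapse onto the
   # preferred node, so a single node answers for the VIP at any given time
   pcs resource clone gluster-vip clone-max=3 clone-node-max=3 globally-unique=true
   # prefer the "fast" node; on failure the instances move to another TSP member
   pcs constraint location gluster-vip-clone prefers fastnode=100

Still, that is one more moving part between the clients and the bricks, which is exactly the kind of complexity I would rather avoid.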
Thanks.

--
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8