I guess the only way is to use the FUSE client mount options and manually change the source brick.
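One mount option often used in this area is backup-volfile-servers; a minimal sketch (hostnames, volume name and mount point are placeholders, not taken from this thread):

    # mount via node1, falling back to node2/node3 for fetching the volfile
    mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/gv0 /mnt/gluster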
Another option that comes to my mind is Pacemaker with an IPaddr2 resource and the option globally-unique=true. If done properly, Pacemaker will bring the IP up on all nodes, but thanks to iptables rules (manipulated automatically by the cluster) only one node will be active at a time, with a preference for the fastest node.
Then the FUSE client can safely be configured to use that VIP, which, in case the fast node fails, will be moved to another node of the Gluster TSP (trusted storage pool).
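A rough sketch of such a setup with pcs, assuming a 3-node TSP (resource name, IP address and netmask below are only placeholders):

    # floating IP backed by iptables/CLUSTERIP: every node holds it,
    # but only one answers a given client at a time
    pcs resource create gluster-vip ocf:heartbeat:IPaddr2 \
        ip=192.0.2.100 cidr_netmask=24 clusterip_hash=sourceip \
        op monitor interval=30s
    # clone it on all nodes with globally-unique=true
    pcs resource clone gluster-vip clone-max=3 clone-node-max=3 globally-unique=true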
Yet, this will be a very complex design.
Best Regards,
Strahil Nikolov
On Wed, Aug 4, 2021 at 22:28, Gionatan Danti <g.danti@xxxxxxxxxx> wrote:

On 2021-08-03 19:51, Strahil Nikolov wrote:
> The difference between a thin and a regular arbiter is that the thin
> arbiter comes into action only when it's needed (one of the data
> bricks is down), so the thin arbiter's latency won't affect you as
> long as both data bricks are running.
>
> Keep in mind that the thin arbiter is less widely used. For example, I
> have never deployed a thin arbiter.
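(For context, a thin arbiter is defined at volume-creation time with a dedicated brick; hostnames and paths below are only placeholders:)

    gluster volume create gv0 replica 2 thin-arbiter 1 \
        node1:/bricks/gv0 node2:/bricks/gv0 ta-node:/bricks/gv0-ta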
Maybe I am horribly wrong, but local-node reads should *not* involve
other nodes in any manner - i.e., no checksum or voting is done for reads.
AFR hashing should spread different files to different nodes when doing
striping, but for mirroring any node should have a valid copy of the
requested data.
So when using choose-local, all reads which can really be local (i.e.,
the requested file is available locally) should not suffer from
remote-node latency.
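(For reference, this behaviour is controlled by the cluster.choose-local volume option; the volume name below is just a placeholder:)

    gluster volume get gv0 cluster.choose-local
    gluster volume set gv0 cluster.choose-local on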
Is that correct?
Thanks.
--
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8