Re: Read from fastest node only

I think you mean cluster.choose-local, which is enabled by default. Even so, Gluster will still check whether the local copy is healthy.
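
For reference, here is a quick sketch of how that option could be checked or toggled from the gluster CLI (the volume name "gv0" is just a placeholder; substitute your own):

# show the current value (on by default)
gluster volume get gv0 cluster.choose-local

# turn it off if you don't want reads pinned to the local replica
gluster volume set gv0 cluster.choose-local off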

Best Regards,
Strahil Nikolov






On Wednesday, 28 July 2021, 10:20:55 GMT+3, Yaniv Kaul <ykaul@xxxxxxxxxx> wrote:







On Wed, Jul 28, 2021 at 5:50 AM David Cunningham <dcunningham@xxxxxxxxxxxxx> wrote:
> Hi Yaniv,
> 
> It may be my lack of knowledge, but I can't see how the fastest response time could differ from file to file. If that's true, then it would be enough to test periodically which node is fastest for this client, rather than having to do it for every single read.

In real life, the 'best' node is the one with the most overall free resources across CPU, network and disk IO, so it can change, and it might change all the time.
Network or disk saturation can be common, the disk may be doing garbage collection, the CPU may be hogged by something else, there may be a noisy neighbor, etc...

Our latency check is indeed not per file, AFAIK.
Y.

> 
> Thanks for the tip about read-hash-mode. I see the help is as below. Value 4 may help, but not if the latency is tested for every file read. Value 0 may help, but it depends on how the children are ordered. Does anyone know more about how these work? (See the example sketch after the list below.)
> 
> Option: cluster.read-hash-mode
> Default Value: 1
> Description: inode-read fops happen only on one of the bricks in replicate. AFR will prefer the one computed using the method specified using this option.
> 0 = first readable child of AFR, starting from 1st child.
> 1 = hash by GFID of file (all clients use same subvolume).
> 2 = hash by GFID of file and client PID.
> 3 = brick having the least outstanding read requests.
> 4 = brick having the least network ping latency.
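> 
> As a rough sketch (the volume name "gv0" is only a placeholder), I assume the option would be changed and verified like this:
> 
> # prefer the brick with the least network ping latency
> gluster volume set gv0 cluster.read-hash-mode 4
> 
> # confirm the value in effect
> gluster volume get gv0 cluster.read-hash-mode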
> 
> Thanks again.
> 
> 
> On Tue, 27 Jul 2021 at 19:16, Yaniv Kaul <ykaul@xxxxxxxxxx> wrote:
>> 
>> 
>> On Tue, Jul 27, 2021 at 9:50 AM David Cunningham <dcunningham@xxxxxxxxxxxxx> wrote:
>>> Hello,
>>> 
>>> We have a replicated GlusterFS cluster, and my understanding is that the GlusterFS FUSE client will check the file with all nodes before doing a read.
>>> 
>>> For our application it is not actually critical to be certain of having the latest version of a file, and it would be preferable to speed up the read by simply reading the file from the fastest node. This would be especially beneficial if some of the other nodes have higher latency from the client.
>> 
>> How do you define, in real time, per file, which is the fastest node?
>> Maybe you are looking for read-hash-mode volume option?
>> Y.
>> 
>>> 
>>> Is it possible to do this? Thanks in advance for any assistance.
>>> 
>>> -- 
>>> David Cunningham, Voisonics Limited
>>> http://voisonics.com/
>>> USA: +1 213 221 1092
>>> New Zealand: +64 (0)28 2558 3782
>>> 
>> 
> 
> 
> -- 
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
> 

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users