Re: fast_read in EC pools

I don’t actually know this option, but based on your results it’s clear that “fast read” tells the primary OSD to issue reads to all k+m OSDs storing shards and then reconstruct the data from the first k that respond. Without fast read, it simply asks the k OSDs holding the data shards for a straight read and sends the reply back once all of them have answered. This is a straight trade-off of more bandwidth for lower long-tail latencies.
-Greg
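
To put rough numbers on that trade-off, here is a minimal sketch (not actual Ceph code and not from this thread; it just restates the behaviour described above, assuming the k=4, m=2 profile from the original mail):

# Illustrative only: "normal" EC read vs. fast_read, as described above.
K, M = 4, 2  # erasure-coding profile of the pool in question

def normal_read():
    # Primary requests exactly the k data shards and must wait for all of
    # them, so the slowest of the k replies determines the latency.
    return {"shard_reads": K, "latency": "max of k replies"}

def fast_read():
    # Primary requests all k+m shards and reconstructs the object from the
    # first k replies, so stragglers no longer add to the latency.
    return {"shard_reads": K + M, "latency": "k-th fastest of k+m replies"}

print(normal_read())  # {'shard_reads': 4, 'latency': 'max of k replies'}
print(fast_read())    # {'shard_reads': 6, 'latency': 'k-th fastest of k+m replies'}
print((K + M) / K)    # 1.5 -> ~50% more backend shard reads per object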
On Mon, Feb 26, 2018 at 3:57 AM Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx> wrote:
Some additional information gathered from our monitoring:
It seems fast_read does indeed become active immediately, but I do not understand the effect.

With fast_read = 0, we see:
~ 5.2 GB/s total outgoing traffic from all 6 OSD hosts
~ 2.3 GB/s total incoming traffic to all 6 OSD hosts

With fast_read = 1, we see:
~ 5.1 GB/s total outgoing traffic from all 6 OSD hosts
~ 3   GB/s total incoming traffic to all 6 OSD hosts

I would have expected exactly the opposite to happen...
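
(Rough sanity check, assuming the incoming traffic on the OSD hosts is dominated by the shards the primary OSDs fetch from their peers: with k=4, m=2, a fast read touches 6 shards instead of 4, so one would naively expect up to (4+2)/4 = 1.5x more incoming traffic while the outgoing, client-facing traffic stays roughly flat. The observed 2.3 GB/s -> ~3 GB/s (about 1.3x) with essentially unchanged ~5 GB/s outgoing points in that direction; the gap is plausibly explained by shards that happen to live on the primary's own host and never cross the network.)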

Cheers,
        Oliver

On 26.02.2018 at 12:51, Oliver Freyermuth wrote:
> Dear Cephalopodians,
>
> in the few remaining days in which we can still play with parameters at will,
> we just now tried to set:
> ceph osd pool set cephfs_data fast_read 1
> but did not notice any effect on large sequential file-read throughput on our k=4, m=2 EC pool.
>
> Should this become active immediately? Or do OSDs need a restart first?
> Is the option already deemed safe?
>
> Or is it just that we should not expect any change in throughput, since our system (for large sequential reads)
> is purely limited by the IPoIB throughput, and the shards are requested by the primary OSD in any case?
> So the gain would not be in throughput, but in the reply to the client arriving slightly earlier (before all shards have arrived)?
> In that case the option would mainly be of interest when disk IO is congested (which does not happen for us as of yet),
> and it would not help much if the system is limited by network bandwidth.
>
> Cheers,
>       Oliver
>
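
On the question above of whether the setting becomes active immediately: fast_read is an ordinary pool property, so (assuming a reasonably recent release) its current value should be readable back with the counterpart of the set command quoted above, e.g.

ceph osd pool get cephfs_data fast_read

which at least confirms the flag was accepted; and as the follow-up with the traffic numbers shows, the behaviour changed without restarting any OSDs.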


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
