On Sun, Feb 17, 2019 at 9:51 PM <jesper@xxxxxxxx> wrote:
>
> > Probably not related to CephFS. Try to compare the latency you are
> > seeing to the op_r_latency reported by the OSDs.
> >
> > The fast_read option on the pool can also help a lot for this IO pattern.
>
> Magic, that actually cut the read-latency in half - making it more
> aligned with what to expect from the HW+network side:
>
>       N           Min           Max        Median           Avg        Stddev
> x   100      0.015687      0.221538      0.025253    0.03259606   0.028827849
>
> 25ms as a median, 32ms average is still on the high side,
> but way, way better.

I'll use this opportunity to point out that serial archive programs like
tar are terrible for distributed file systems. It would be awesome if
someone multithreaded tar or extended it for asynchronous I/O. If only I
had more time (TM)...

-- 
Patrick Donnelly

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
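For anyone following along: the fast_read option discussed above is a per-pool
property set with `ceph osd pool set`. A minimal sketch (the pool name
`cephfs_data` is an assumption; substitute your CephFS data pool):

```shell
# Enable fast_read on the data pool. With fast_read, reads on an
# erasure-coded pool are issued to all shards and the OSD replies as
# soon as enough shards have answered, rather than waiting on one
# fixed set -- which hides slow-OSD tail latency.
# NOTE: pool name "cephfs_data" is an assumption for illustration.
ceph osd pool set cephfs_data fast_read 1

# Confirm the setting took effect.
ceph osd pool get cephfs_data fast_read
```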
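On the serial-tar point: until tar itself is multithreaded, one common
workaround is to fan the per-file reads out across several workers so the
round-trip latencies overlap instead of adding up. A rough sketch with
`xargs -P` (the path `/mnt/cephfs/data` and the worker count are
assumptions; tune -P to your client/OSD setup):

```shell
# Read every file under the tree with 8 concurrent workers, 16 files
# per batch. On a distributed FS this overlaps per-file read latency
# that a serial "tar -c" would pay sequentially; it can also be used
# as a cache-warming pass before an archive run.
# NOTE: /mnt/cephfs/data is an assumed mount point for illustration.
find /mnt/cephfs/data -type f -print0 \
  | xargs -0 -P 8 -n 16 cat > /dev/null
```

This doesn't produce an archive by itself, but it illustrates why a
parallel or async-I/O tar would help: the bottleneck here is per-file
latency, not throughput.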