Re: Optimizations on "high" latency Ceph clusters


 



Hi,

What makes this cluster a non-local cluster?

It's hosted in OVH's 3-AZ region, with each host in a different datacenter, each around 30-60 km from the others, hence the relatively high latency.


0.6 to 1 ms RTT latency seems too high for all-flash clusters and intense 4K write workloads.

I'm fully aware of the limitations imposed by the latency. I was wondering if there is something that could be done to improve performance under these conditions. Measured performance is more than enough for the workloads that the cluster will host, as 4k QD=1 sync writes/reads are not the main I/O pattern.
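For what it's worth, a 4k QD=1 test like the one discussed can be reproduced with fio's rbd engine; the pool name, image name, and client name below are placeholders, not anything from this thread:

```shell
# 4k random writes, queue depth 1, directly against an RBD image.
# --pool/--rbdname/--clientname are placeholders -- adjust to your cluster.
fio --name=4k-qd1-write \
    --ioengine=rbd --clientname=admin --pool=rbd --rbdname=bench \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --direct=1 --runtime=60 --time_based
```

Swapping --rw=randwrite for --rw=randread gives the matching read-side number.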

The upmap-read or read balancer modes may help with reads but not writes where 1.2ms+ latency will still be observed.
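That 1.2 ms+ figure follows from the replicated write path: the client sends the op to the PG's primary OSD, which must hear back from the replica OSDs before acking. A back-of-envelope sketch (a simplification that ignores disk, CPU, and queueing time; the replica fan-out is assumed to happen in parallel):

```python
# Rough latency floor for a replicated Ceph write:
# one client -> primary RTT, plus one primary -> replicas RTT
# (replica sub-ops go out in parallel, so one RTT covers them).
def write_latency_floor(client_primary_rtt_ms: float, replica_rtt_ms: float) -> float:
    return client_primary_rtt_ms + replica_rtt_ms

# With the 0.6-1.0 ms inter-node RTT described in this thread:
print(write_latency_floor(0.6, 0.6))  # 1.2 ms best case
print(write_latency_floor(1.0, 1.0))  # 2.0 ms worst case
```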

AFAIK upmap-read isn't available in Reef, at least does not show up in the docs [1].
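Right, the upmap-read balancer mode is newer than Reef. What Reef does document is the offline read (primary) balancer via osdmaptool, which computes pg-upmap-primary entries for a replicated pool. A sketch, with "mypool" as a placeholder pool name:

```shell
# Confirm which balancer modes this release actually offers.
ceph balancer status

# Offline read balancer (documented for Reef): dump the osdmap and let
# osdmaptool suggest pg-upmap-primary adjustments for one pool.
# "mypool" is a placeholder.
ceph osd getmap -o om
osdmaptool om --read out.txt --read-pool mypool

# out.txt contains the suggested "ceph osd pg-upmap-primary" commands;
# review them before applying.
source out.txt
```

This only rebalances primary (read) placement, so as noted it won't help the write path.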

Thanks!


[1] https://docs.ceph.com/en/reef/rados/operations/balancer/

Regards,
Frédéric.

----- On 1 Oct 24, at 18:23, Victor Rodriguez <vrodriguez@xxxxxxxxxxxxx> wrote:

Hello,

I'm trying to get the most from a 3-node Ceph Reef cluster with 6 NVMe
OSDs per node. RTT between nodes is between 0.6 and 1 ms. Obviously
performance isn't as good as with local clusters, which usually sit
around ~0.2 ms. 4k QD=1 write requests are the most affected by this, as
expected, with a performance loss of around 70% compared to an all-local
cluster with similar hardware. In general, measurements match the
increased latency.

Are there any guidelines about what to tune or adjust to get the most of
this kind of setups with "high" latencies?

Thanks in advance!

Victor

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



