On Tue, Jul 23, 2013 at 9:04 AM, Matthew Walster <matthew@xxxxxxxxxxx> wrote:
>
> That's fantastic, thanks. I'm assuming that 5ms is probably too much for
> the OSDs -- do we have any idea/data as to the effect of latency on OSDs
> if they were split over a similar distance? Or even a spread - 0.5ms, 1ms,
> 2ms etc. Obviously this is a bit theoretical, as you'd not want to have
> data that could be local pulled from a far-away data center over your
> expensive links when in-datacenter would clearly be the better choice.

I don't know if anybody has yet run a setup that looks like this, but it
should work fine from the cluster's perspective, even without tuning
anything. The issue with higher-latency links is that they translate
directly into client-visible latency for write ops (and for read ops, if
the primary is remote). If you have three OSDs separated by 5ms each, all
hosting a PG, then your lower-bound latency for a write op is 10ms: 5ms
for the primary to send to the replicas, and 5ms for them to ack back.

-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
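
For anyone who wants to play with the spread of latencies Matthew mentions
(0.5ms, 1ms, 2ms, ...), here is a minimal Python sketch of the arithmetic
Greg describes. It is not Ceph code; it only assumes the primary must hear
an ack from every replica before acknowledging the client, and the function
and variable names are made up for illustration.

    # Illustrative sketch (not Ceph code): lower-bound client-visible
    # latency for a replicated write, given one-way latencies in ms.
    def write_latency_lower_bound(client_to_primary_ms, primary_to_replicas_ms):
        # Client -> primary, then primary -> slowest replica and back,
        # then primary -> client with the ack. Ignores disk and queuing.
        replica_round_trip = 2 * max(primary_to_replicas_ms, default=0)
        return 2 * client_to_primary_ms + replica_round_trip

    # Example from the thread: client next to the primary (~0ms),
    # two replicas 5ms away each -> 10ms lower bound.
    print(write_latency_lower_bound(0, [5, 5]))    # 10
    print(write_latency_lower_bound(0.5, [2, 1]))  # 5.0

The slowest replica dominates, so in a mixed spread the 2ms link sets the
floor, not the 0.5ms one.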