Has anyone tried operating (or simulating) a Ceph cluster over various link speeds? There was a mailing list question months ago about Ceph over "WAN", and the consensus was that it would not perform well - but there is a broad spectrum of link speeds and latencies in the real world. LAN and WAN are pretty blurry these days, especially in the datacenters where large clusters will live.

The wiki talks about using CRUSH maps to spread data out over different racks and servers, but there seems to be no reason CRUSH couldn't be used to place data copies on different floors or in nearby datacenters (think Equinix DC2 and DC7 in Washington - 1-3ms cross-connect latency, large pipes). Thinking more broadly, the latency between DC7 and home users in Richmond, Virginia is 10-15ms, and a user there with FiOS can easily get 15Mbps of real throughput in both directions.

What is the performance impact of bandwidth and latency, especially between complete replica sets? How far can Ceph be pushed, and what starts to hurt first? Does an entire cluster have to be in the same datacenter, the same city, or the same state?

Matthew
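
P.S. For concreteness, here is a rough, untested sketch of the kind of decompiled CRUSH map I have in mind - bucket names, IDs, and weights are all made up, and I am going from memory of the crushtool -d output format, so treat the syntax as approximate:

# devices and type sections elided; 'datacenter' is one of the
# default bucket types in the stock type table.

# hypothetical hosts, one per site
host host-a {
    id -2
    alg straw
    hash 0    # rjenkins1
    item osd.0 weight 1.000
    item osd.1 weight 1.000
}
host host-b {
    id -3
    alg straw
    hash 0
    item osd.2 weight 1.000
    item osd.3 weight 1.000
}

# hypothetical datacenter buckets, named after the Equinix sites
datacenter dc2 {
    id -4
    alg straw
    hash 0
    item host-a weight 2.000
}
datacenter dc7 {
    id -5
    alg straw
    hash 0
    item host-b weight 2.000
}

root default {
    id -1
    alg straw
    hash 0
    item dc2 weight 2.000
    item dc7 weight 2.000
}

# place each replica in a distinct datacenter rather than
# a distinct host
rule metro-replicated {
    ruleset 1
    type replicated
    min_size 2
    max_size 4
    step take default
    step chooseleaf firstn 0 type datacenter
    step emit
}

If that works the way I expect, then with 2x replication every write crosses the cross-connect, and since (as I understand it) writes are only acknowledged once all replicas have the data, the 1-3ms inter-site latency would show up directly in every write ack - which is part of what makes me curious what starts to hurt first.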