On Tue, 29 May 2012, Tommi Virtanen wrote:
> For example: with data replicated over data centers A, B, C, connected at 1 Gb/s, the fastest all of A will ever handle writes is 0.5 Gb/s -- it'll need to replicate everything to B and C, over that single pipe.
>
> I am aware of a few people building multi-dc Ceph clusters. Some have shared their network latency, bandwidth and availability numbers with me (confidentially), and at first glance their wide-area network performs better than many single-dc networks. They are far above a 1 gigabit interconnect.
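Just to check that I follow the arithmetic: with three replicas, the primaries in A have to forward one copy to B and one to C for every client write they accept, so the shared interconnect carries roughly twice the client write rate. A back-of-the-envelope sketch (the replica count, the link speeds and the helper name are only my own illustration of your example, not anything taken from Ceph itself):

    # Back-of-the-envelope only: client write ceiling into one data center,
    # assuming synchronous replication over a single shared inter-DC link.
    def max_client_write_gbps(link_gbps, replicas):
        # Each write accepted by a primary in A is forwarded to the
        # (replicas - 1) other data centers over the same link.
        return link_gbps / (replicas - 1)

    print(max_client_write_gbps(link_gbps=1.0, replicas=3))   # -> 0.5 Gb/s
    print(max_client_write_gbps(link_gbps=10.0, replicas=3))  # -> 5.0 Gb/s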
We currently have a mixed 1 Gbit/s and 10 Gbit/s metropolitan network with 0.5 ms latency between the sites (usually two router hops and a couple of switches). Latency within a site, through a single router/switch and a few switches, is 0.1-0.2 ms. Ordinary Ethernet.
If bandwidth becomes a limiting factor, it should not be too hard to bond multiple 10 Gbit/s Ethernet links between the sites that have already been upgraded to 10 Gbit/s.
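On latency, if I understand the write path correctly, the primary waits for acknowledgements from the replicas in the other sites before acknowledging the client, so every replicated write should pay at least roughly one inter-site round trip on top of the local one. A rough sketch using only the figures above (the write-path assumption and the numbers are mine, not measurements of Ceph):

    # Rough lower bound on network latency per replicated write, assuming the
    # primary waits for the remote replica acks in parallel.
    local_rtt_ms = 0.2       # client <-> primary within one site (0.1-0.2 ms)
    inter_site_rtt_ms = 0.5  # primary <-> replicas in the other sites

    floor_ms = local_rtt_ms + inter_site_rtt_ms
    print("network floor per write: ~%.1f ms" % floor_ms)
    # ~0.7 ms, before any OSD, journal or disk time is added on top.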
> I would really recommend you embark on a project like this only if you are able to understand the Ceph replication model, and do the math for yourself and figure out what your expected service levels for Ceph operations would be. (Naturally, Inktank Professional Services will help you in your endeavors, though their first response should be "that's not a recommended setup".)
I am aware of the replication model. That model, together with the potential for seamless reliability and scalability, is what makes Ceph interesting.
> The Ceph Distributed File System is not considered production ready yet.
I am waiting as fast as I can for it to be production ready. :-) Good luck to you all.

--jerker