Re: [Best practice] Adding new data center

On 01/29/2018 06:33 PM, Nico Schottelius wrote:

Good evening list,

we are soon expanding our data center [0] to a new location [1].

We are mainly offering VPS / VM Hosting, so rbd is our main interest.
We have a low-latency 10 Gbit/s link to our other location [2], and
we are wondering what the best practice for expanding is.


What is 'low latency'? If you are not using RBD mirroring and are trying to span a Ceph cluster over two DCs, you will usually run into latency problems.

Any increase in latency lowers the IOPS and decreases performance.
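A rough illustration of why this matters: with queue depth 1, each write has to wait for at least one network round trip (plus replication to the other OSDs) before it is acknowledged, so the round-trip time directly caps the achievable IOPS. A minimal sketch with purely illustrative numbers:

    # Back-of-the-envelope only: a queue-depth-1 synchronous write completes
    # at most once per round trip, so IOPS is roughly bounded by 1 / RTT.
    def max_sync_iops(rtt_ms):
        """Upper bound on QD=1 IOPS for a given round-trip time in ms."""
        return 1000.0 / rtt_ms

    for rtt in (0.2, 0.5, 1.0, 2.0):  # local switch vs. inter-DC link, in ms
        print(f"RTT {rtt:.1f} ms -> at most ~{max_sync_iops(rtt):.0f} IOPS per client thread")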

Naturally we are thinking about creating a new Ceph cluster that is
independent from the first location, since connection interruptions
(unlikely) or separate power outages (more likely) are a concern.

Given that we would be running two different Ceph clusters, we are
thinking about rbd mirroring, so that we can (partially) mirror one side
to the other, or vice versa.

However, with this approach we lose the possibility of having very big
rbd images (big as in tens to hundreds of TB), as the storage is divided.
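For reference, a minimal sketch of what enabling one-way RBD mirroring could look like via the Python rbd bindings; the conffile path, pool name, image name and peer cluster/client names are placeholders, and the rbd-mirror daemon still has to run on the peer site to replay the journal:

    import rados
    import rbd

    # Connect to the local cluster (placeholder conffile path and pool name).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    r = rbd.RBD()
    # Per-image mirroring mode: only explicitly enabled images get mirrored.
    r.mirror_mode_set(ioctx, rbd.RBD_MIRROR_MODE_IMAGE)
    # Register the remote cluster as a peer (placeholder names).
    r.mirror_peer_add(ioctx, 'remote-site', 'client.rbd-mirror-peer')

    with rbd.Image(ioctx, 'vm-disk-1') as image:  # placeholder image name
        # Journal-based mirroring needs the journaling feature on the image.
        image.update_features(rbd.RBD_FEATURE_JOURNALING, True)
        image.mirror_image_enable()

    ioctx.close()
    cluster.shutdown()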

My question to the list is, how have you handled this situation so far?

Would you also recommend splitting, or have you expanded Ceph clusters
over several kilometers before? What were your experiences?


Like I said, latency, latency, latency. That's what matters. Bandwidth usually isn't a real problem.

What latency do you have with an 8k ping between hosts?
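As a sketch of how one might check that, assuming plain ICMP is allowed between an OSD host in each location (the hostname is a placeholder):

    import subprocess

    # Measure round-trip time with an 8 KiB ICMP payload (iputils ping flags:
    # -c = packet count, -s = payload size in bytes). Hostname is a placeholder.
    host = "osd-node.remote-dc.example"
    result = subprocess.run(
        ["ping", "-c", "20", "-s", "8192", host],
        capture_output=True, text=True, check=True,
    )
    # The final line of the output reports min/avg/max/mdev RTT in milliseconds.
    print(result.stdout.strip().splitlines()[-1])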

Wido

I am very curious to hear your answers!

Best,

Nico



[0] https://datacenterlight.ch
[1] Linthal, in pretty Glarus
     https://www.google.ch/maps/place/Linthal,+8783+Glarus+S%C3%BCd/
[2] Schwanden, also pretty
     https://www.google.ch/maps/place/Schwanden,+8762+Glarus+S%C3%BCd/

--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
