Re: concept of ceph and 2 datacenters

Hi Vladimir,
thanks for answering ... of course, we will build a 3 DC setup (tiebreaker or a full server).

I'm not sure what to do about "disaster recovery".
Is it realistic that a Ceph cluster can be completely broken?

Kind regards,
Ronny

--
Ronny Lippold
System Administrator

--
Spark 5 GmbH
Rheinstr. 97
64295 Darmstadt
Germany
--
Fon: +49-6151-8508-050
Fax: +49-6151-8508-111
Mail: ronny.lippold@xxxxxxxxx
Web: https://www.spark5.de
--
Geschäftsführer: Henning Munte, Michael Mylius
Amtsgericht Darmstadt, HRB 7809
--

On 2024-02-14 06:59, Vladimir Sigunov wrote:
Hi Ronny,
This is a good starting point for your design.
https://docs.ceph.com/en/latest/rados/operations/stretch-mode/

My personal experience says that a 2 DC Ceph deployment can suffer
from a 'split brain' situation. If you have any chance to create a 3
DC configuration, I would suggest considering it. It may be more
expensive, but it will definitely be more reliable and fault tolerant.
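
Just to make the link above more concrete, a minimal sketch of the
stretch-mode setup from those docs could look like this (the monitor
names a-e, the site names and the rule name are placeholders, not
anything from your cluster):

    # Monitors: two per site plus a tiebreaker in the third location,
    # using the connectivity election strategy required by stretch mode.
    ceph mon set election_strategy connectivity
    ceph mon set_location a datacenter=site1
    ceph mon set_location b datacenter=site1
    ceph mon set_location c datacenter=site2
    ceph mon set_location d datacenter=site2
    ceph mon set_location e datacenter=site3   # tiebreaker monitor

    # CRUSH rule (added to the decompiled CRUSH map) that keeps
    # two replicas in each of the two data sites:
    rule stretch_rule {
        id 1
        type replicated
        step take site1
        step chooseleaf firstn 2 type host
        step emit
        step take site2
        step chooseleaf firstn 2 type host
        step emit
    }

    # Enable stretch mode with 'e' as the tiebreaker and 'datacenter'
    # as the dividing CRUSH bucket type:
    ceph mon enable_stretch_mode e stretch_rule datacenter

With that in place the cluster keeps serving I/O if one of the two data
sites goes down, because the tiebreaker monitor still gives the
surviving site a monitor quorum.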

Sincerely,
Vladimir

________________________________
From: ronny.lippold@xxxxxxxxx <ronny.lippold@xxxxxxxxx>
Sent: Tuesday, February 13, 2024 6:50:50 AM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject:  concept of ceph and 2 datacenters

Hi there,
I have a design/concept question, to see what is out there and which kind
of redundancy you use.

Currently we use 2 Ceph clusters with rbd-mirror to have a cold-standby
clone.
But rbd-mirror is not application consistent, so we cannot be sure
that all VMs (KVM/Proxmox) will come up cleanly.
We also waste a lot of hardware.
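
The closest we get to consistency would probably be freezing the guests
through the qemu-guest-agent before each mirror snapshot, roughly like
this (VM id and image name are only examples, assuming snapshot-based
mirroring and the guest agent installed in the VMs):

    # Freeze the guest filesystems through the QEMU guest agent (Proxmox)
    qm guest cmd 101 fsfreeze-freeze

    # Take a mirror snapshot that rbd-mirror replicates to the peer cluster
    rbd mirror image snapshot rbd/vm-101-disk-0

    # Thaw the guest again
    qm guest cmd 101 fsfreeze-thaw

But that is still only filesystem consistency and does not scale nicely
over many VMs.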

So now we are thinking about one big cluster spanning the two
datacenters (two buildings).
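
The rough idea would be a CRUSH hierarchy with one datacenter bucket per
building and 4 replicas, 2 per building, something like this (bucket,
host and pool names are only placeholders):

    # Describe the two buildings in the CRUSH map
    ceph osd crush add-bucket dc1 datacenter
    ceph osd crush add-bucket dc2 datacenter
    ceph osd crush move dc1 root=default
    ceph osd crush move dc2 root=default
    # then move each host under its building:
    # ceph osd crush move <host> datacenter=dc1

    # 4 replicas in total, never serve I/O with fewer than 2 copies
    ceph osd pool set rbd size 4
    ceph osd pool set rbd min_size 2

plus some way to keep monitor quorum when one building goes dark.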

My question is: do you care about Ceph-level redundancy, or is one Ceph
cluster with backups enough for you?
I know that with Ceph we are covered against HDD or server failures. But
are software failures a real scenario?

It would be great to get some ideas from you,
also about the bandwidth between the 2 datacenters.
We are using 2x 6 Proxmox servers with 2x6x9 OSDs (SAS SSD).

Thanks for your help, my mind is spinning.

Kind regards,
Ronny


--
Ronny Lippold
System Administrator

--
Spark 5 GmbH
Rheinstr. 97
64295 Darmstadt
Germany
--
Fon: +49-6151-8508-050
Fax: +49-6151-8508-111
Mail: ronny.lippold@xxxxxxxxx
Web: https://www.spark5.de
--
Geschäftsführer: Henning Munte, Michael Mylius
Amtsgericht Darmstadt, HRB 7809
--
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



