Re: Newbie question: stretch ceph cluster

On Fri, Feb 9, 2018 at 2:59 PM, Kai Wagner <kwagner@xxxxxxxx> wrote:
> Hi and welcome,
>
>
> On 09.02.2018 15:46, ST Wong (ITSC) wrote:
>
> Hi, I'm new to Ceph and have been given the task of setting up Ceph with
> some kind of DR capability. We have two data centers on the same campus,
> connected at 10Gb. I wonder if it's possible to set up a Ceph cluster with
> the following components in each data center:
>
>
> 3 x mon + mds + mgr
In this scenario you wouldn't be any better off, as losing a room means
losing half of your MONs: with 3 per data center (6 in total), the
surviving 3 are not a majority, so the cluster loses quorum. Can you run
an additional MON somewhere else, so that quorum survives the loss of
either room?

As for MGR and MDS, they run active/standby (which is the recommended
setup), so one per room is enough.
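
As a rough sketch of the MON layout (the hostnames and addresses below
are made-up examples): two MONs per room plus a small fifth MON in a
third location (even a VM elsewhere on campus) keeps quorum when either
room is down, since any 3 of the 5 still form a majority. In ceph.conf
that could look something like:

[global]
    # mon-a* in DC A, mon-b* in DC B, mon-tie in the third location
    mon initial members = mon-a1, mon-a2, mon-b1, mon-b2, mon-tie
    mon host = 10.0.1.11, 10.0.1.12, 10.0.2.11, 10.0.2.12, 10.0.3.11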
>
> 3 x OSD (replication factor=2, between the data centers)

Replication with size=2 is a bad idea. You can use size=4 and
min_size=2 with a CRUSH rule something like this:

rule crosssite {
        id 0
        type replicated
        min_size 4
        max_size 4
        step take default
        # choose 2 different rooms
        step choose firstn 2 type room
        # then 2 different hosts within each chosen room
        step chooseleaf firstn 2 type host
        step emit
}

This will store 4 copies: the rule picks 2 rooms and, within each room,
2 different hosts.
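
If it helps, this is roughly how you would inject such a rule and point
a pool at it (the pool name "rbd" and the bucket/host names are just
examples, and this assumes your hosts are grouped under room buckets in
the CRUSH tree):

# make sure each host sits under a room bucket, e.g.:
ceph osd crush add-bucket room1 room
ceph osd crush move room1 root=default
ceph osd crush move host1 room=room1

# export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# add the rule above to crushmap.txt, then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# optionally simulate placements before injecting
crushtool -i crushmap.new --test --rule 0 --num-rep 4 --show-mappings

# point the pool at the rule and set the sizes
ceph osd pool set rbd crush_rule crosssite
ceph osd pool set rbd size 4
ceph osd pool set rbd min_size 2

With min_size=2 the pool stays writable when one room is down (2
surviving copies), and Ceph backfills to 4 copies once the room comes
back.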

>
>
> So that any one of the following failures won't affect the cluster's
> operation or data availability:
>
> failure of any one component in either data center
> failure of either data center as a whole
>
>
> Is it possible?
>
> In general this is possible, but I would consider replica=2 a bad idea. In a
> failure scenario, or even just during maintenance when one DC is powered off,
> a single disk failing in the other DC can already lead to data loss. My
> advice here would be: if at all possible, please don't use replica=2.
>
> In the case of a data center failure, it seems replication can't continue
> any more. Is there a CRUSH rule that can achieve this?
>
>
> Sorry for the newbie question.
>
>
> Thanks a lot.
>
> Regards
>
> /st wong
>
>
>
>
>
> --
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB
> 21284 (AG Nürnberg)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



