Re: Newbie question: stretch ceph cluster


 



Hi,

 

Thanks a lot,

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Kai Wagner
Sent: Friday, February 09, 2018 11:00 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Newbie question: stretch ceph cluster

 

Hi and welcome,

 

On 09.02.2018 15:46, ST Wong (ITSC) wrote:

Hi, I'm new to Ceph and have been given the task of setting up Ceph with a kind of DR capability.  We have two data centers on the same campus, connected at 10Gb.    I wonder if it's possible to set up a Ceph cluster with the following components in each data center:

 

3 x mon + mds + mgr

3 x OSD (replication factor = 2, between the data centers)

 

So that any one of the following failures won't affect the cluster's operation or data availability:

  • any one component in either data center
  • failure of either data center

 

Is it possible?
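
Concretely, I imagine the CRUSH hierarchy would be set up something like this (a rough sketch, with made-up host names node1..node6 for the three OSD hosts in each DC):

    # Create one datacenter bucket per DC under the default root
    ceph osd crush add-bucket dc1 datacenter
    ceph osd crush add-bucket dc2 datacenter
    ceph osd crush move dc1 root=default
    ceph osd crush move dc2 root=default

    # Move each DC's three OSD hosts under its datacenter bucket
    ceph osd crush move node1 datacenter=dc1
    ceph osd crush move node2 datacenter=dc1
    ceph osd crush move node3 datacenter=dc1
    ceph osd crush move node4 datacenter=dc2
    ceph osd crush move node5 datacenter=dc2
    ceph osd crush move node6 datacenter=dc2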

>In general this is possible, but I would argue that replica=2 is not a good idea. In a failure scenario, or even during planned maintenance when one DC is powered off, a single disk failure in the other DC can already lead to data loss. My advice: if at all possible, don't run with replica=2.

Then we would at least have to use replica > 2, replicating between the DCs and also among the OSDs within each DC, e.g. something like the sketch below.   Is that correct?   Thanks again.
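
For example, something like this (a rough sketch; the pool name "mypool" is made up, and I'm assuming Luminous, where the pool property is called crush_rule):

    # Four copies in total; a CRUSH rule (see below) would place two in each DC
    ceph osd pool set mypool size 4
    # Keep serving I/O while at least two copies remain,
    # so losing a whole DC shouldn't block the pool
    ceph osd pool set mypool min_size 2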

 

 

In the case where one data center fails, it seems replication can no longer take place.   Is there a CRUSH rule that can achieve this?
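
Would a rule along these lines do it? A rough sketch, edited into the decompiled CRUSH map (the rule name and id are made up):

    # ceph osd getcrushmap -o cm.bin && crushtool -d cm.bin -o cm.txt
    rule replicated_2dc {
            id 1
            type replicated
            min_size 1
            max_size 10
            step take default
            # pick both datacenters ...
            step choose firstn 2 type datacenter
            # ... then two different hosts (one OSD each) in every DC
            step chooseleaf firstn 2 type host
            step emit
    }
    # recompile and inject:
    #   crushtool -c cm.txt -o cm.new && ceph osd setcrushmap -i cm.new
    #   ceph osd pool set mypool crush_rule replicated_2dc

If I read the docs correctly, combined with size=4 and min_size=2 above, losing one DC would leave two copies in the surviving DC and the pool would stay active, although new writes would only be replicated within that DC until the failed one returns.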

 

Sorry for the newbie question.

 

Thanks a lot.

Regards

/st wong

 

 







-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
