Re: Single Cluster / Reduced Failure Domains

Spell check fail, that of course should have read CRUSH map.


Sent from Samsung Mobile



-------- Original message --------
From: harri <harri@xxxxxxxxxxxxxx>
Date: 18/06/2013 19:21 (GMT+00:00)
To: Gregory Farnum <greg@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Single Cluster / Reduced Failure Domains


Thanks Greg,

My concern is an "all eggs in one basket" approach to storage design. Is it possible, however unlikely, that a single Ceph cluster could be brought down entirely (obviously yes)? And what if you wanted to operate separate storage networks?

It feels right to build virtual environments in a modular design, with compute and storage sized to run a set number of VMs, and then to scale that design by building new, separate modules or pods as more VMs are needed.

The benefits of Ceph seem to grow as more commodity hardware is added, but I'm wondering whether it would be workable to build multiple Ceph clusters along those modular lines, still getting the replication and self-healing features but on a smaller scale per pod (assuming each pod has enough hardware to achieve the required performance).
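To make the idea concrete, here is a rough sketch of the pod-per-cluster layout I have in mind, using Ceph's named-cluster support (the pod names, monitor hosts, and pool of conf files are all made up for illustration):

    # /etc/ceph/pod1.conf -- one cluster per pod, each with its own fsid
    [global]
        fsid = <unique fsid for pod1>
        mon host = pod1-mon1, pod1-mon2, pod1-mon3

    # /etc/ceph/pod2.conf -- same layout, different fsid and monitors

    # the --cluster option picks the matching conf file
    ceph --cluster pod1 -s
    ceph --cluster pod2 -s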

I wonder: does DreamHost run all of its VMs on the same Ceph cluster?

I appreciate Ceph is a different mindset from traditional SAN design, and I want to explore design concepts before implementation. I understand you can create separation using placement groups and CRUSU mapping, but that's all within the same cluster.

Regards, 

Lee.




Sent from Samsung Mobile



-------- Original message --------
From: Gregory Farnum <greg@xxxxxxxxxxx>
Date: 18/06/2013 17:02 (GMT+00:00)
To: harri <harri@xxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Single Cluster / Reduced Failure Domains


On Tuesday, June 18, 2013, harri wrote:

Hi,

 

I wondered what best practice is recommended for reducing failure domains for a virtual server platform. If I wanted to run multiple virtual server clusters, would it be feasible to serve storage from one large Ceph cluster?


I'm a bit confused by your question here. Normally you want as many defined failure domains as possible to best tolerate those failures without data loss.
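For example, in a decompiled CRUSH map the hierarchy is declared explicitly, and each bucket type (host, rack, and so on) is a failure domain that replica placement can be spread across. A rough sketch, with illustrative names and weights:

    host node1 {
        id -2
        alg straw
        hash 0
        item osd.0 weight 1.000
        item osd.1 weight 1.000
    }

    rack rack1 {
        id -3
        alg straw
        hash 0
        item node1 weight 2.000
    }

    root default {
        id -1
        alg straw
        hash 0
        item rack1 weight 2.000
    }

    # a rule step like "step chooseleaf firstn 0 type rack"
    # then spreads replicas across racks rather than hosts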
 

 

I am concerned that, in the unlikely event the whole Ceph cluster fails, ALL my VMs would be offline.


Well, yes?
 

 

Is there any way to ring-fence failure domains within a logical Ceph cluster, or would you instead look to build multiple Ceph clusters (but then that defeats the object of the technology, doesn't it)?


You can separate your OSDs into different CRUSH buckets and then assign different pools to draw from those buckets if you're trying to split up your storage somehow. But I'm still a little confused about what you're after. :)
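
As a rough sketch (the pod, rule, and pool names here are made up), one way to do that from the command line:

    # grab and decompile the current CRUSH map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt

    # in crush.txt, add a separate root for the pod and a rule
    # that only draws from it, e.g.:
    #
    #   rule pod1-rule {
    #       ruleset 1
    #       type replicated
    #       min_size 1
    #       max_size 10
    #       step take pod1
    #       step chooseleaf firstn 0 type host
    #       step emit
    #   }

    # recompile, inject, and point a pool at the new ruleset
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
    ceph osd pool set pod1-vms crush_ruleset 1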
-Greg


--
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
