Re: Single Cluster / Reduced Failure Domains

On Tue, Jun 18, 2013 at 11:21 AM, harri <harri@xxxxxxxxxxxxxx> wrote:
> Thanks Greg,
>
> The concern I have is an "all eggs in one basket" approach to storage
> design. Is it possible, however unlikely, that a single Ceph cluster could
> be brought down entirely (obviously yes)? And what if you wanted to operate
> different storage networks?
>
> It feels right to build virtual environments in a modular design, with
> compute and storage sized to run a set number of VMs, and then to scale
> that modular design by building new, separate modules or pods when you
> need more VMs.
>
> The benefits of Ceph seem to get better as more commodity hardware is
> added, but I'm wondering if it would be workable to build multiple Ceph
> clusters according to a modular design (still getting the replication and
> self-healing features, but on a smaller scale per pod - assume there would
> be enough hardware to achieve performance).
>
> I wonder, does DreamHost run all VMs on the same Ceph cluster?
>
> I appreciate that Ceph is a different mindset from traditional SAN design,
> and I want to explore design concepts before implementation. I understand
> you can create separation using placement groups and CRUSH mapping, but
> that's all within the same cluster.

Okay, got it. You can of course build multiple Ceph clusters on
separate hardware[1] if you like, but if they're all running the same
version of the software I'm not sure how much that buys you unless
each cluster is serving a very different workload.
Other than that I'm not sure what to say — the software is always
going to be a correlated point of failure in distributed systems
unless you could do RAID-1 across both Ceph and Gluster or something.
:p
-Greg
[1]: There are some of the basic hooks to allow multiple clusters on
the same node, but I don't think it's fully wired up through the whole
ecosystem yet.
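As a rough illustration of what those hooks look like, assuming the
conventional per-cluster naming scheme (the cluster names "alpha" and
"beta", the monitor address, and the OSD id below are made up, and the
incomplete wiring mentioned above may bite in some tools):

    # Each cluster gets its own config and keyring, named after the cluster:
    #   /etc/ceph/alpha.conf    /etc/ceph/alpha.client.admin.keyring
    #   /etc/ceph/beta.conf     /etc/ceph/beta.client.admin.keyring
    #
    # Each conf needs its own fsid, monitor addresses/ports, and paths; the
    # $cluster metavariable keeps the on-disk locations from colliding:
    #   [global]
    #   fsid = <unique per cluster>
    #   mon host = 192.0.2.10:6790                # non-default port for "beta"
    #   osd data = /var/lib/ceph/osd/$cluster-$id
    #   log file = /var/log/ceph/$cluster-$name.log
    #
    # Daemons and client tools then pick a cluster by name:
    ceph --cluster alpha status
    ceph --cluster beta status
    ceph-osd --cluster beta -i 12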
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com