Re: Single Cluster / Reduced Failure Domains

On Tue, Jun 18, 2013 at 09:02:12AM -0700, Gregory Farnum wrote:
> On Tuesday, June 18, 2013, harri wrote:
> 
> > Hi,
> >
> > I wondered what best practice is recommended for reducing failure domains
> > on a virtual server platform. If I wanted to run multiple virtual server
> > clusters, would it be feasible to serve storage from a single large Ceph
> > cluster?
> >
> >
> I'm a bit confused by your question here. Normally you want as many
> defined failure domains as possible to best tolerate those failures without
> data loss.
> 
> 
> >
> >
> > I am concerned that, in the unlikely event the whole Ceph cluster fails,
> > *ALL* my VMs would be offline.
> >
> >
> Well, yes?
> 
> 
> >
> >
> > Is there any way to ring-fence failure domains within a logical Ceph
> > cluster, or would you instead look to build multiple Ceph clusters (but
> > then that defeats the object of the technology, doesn't it?)?
> >
> >
> You can separate your OSDs into different CRUSH buckets and then assign
> different pools to draw from those buckets if you're trying to split up
> your storage somehow. But I'm still a little confused about what you're
> after. :)
> -Greg
> 
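
To make Greg's suggestion concrete, something along these lines should work
(all the bucket, host, pool and rule names below are made up, the PG count is
just an example, and the rule id for the last step comes from
"ceph osd crush rule dump"):

    # Put the hosts for one platform under their own CRUSH root
    ceph osd crush add-bucket platform-a root
    ceph osd crush move host-a1 root=platform-a
    ceph osd crush move host-a2 root=platform-a

    # Create a rule that only chooses OSDs under that root,
    # replicating across hosts
    ceph osd crush rule create-simple platform-a-rule platform-a host

    # Create a pool for that platform and point it at the rule
    ceph osd pool create platform-a-pool 256 256
    ceph osd pool set platform-a-pool crush_ruleset <rule-id>

That separates where each platform's data lives, but it is still one cluster
with one set of monitors, which brings me to my point below.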

I think I know what he means, because this is what I've been thinking:

The software (in particular the monitors) is the single point of failure.

For example, if you upgrade Ceph and the monitors fail because of the upgrade,
you will have downtime.

Obviously, it isn't every day I upgrade the software of our SAN either.

But one of the reasons people seem to be moving to software rather than
'hardware' is flexibility, so they want to be able to update it.

I've had Ceph test installations fail an upgrade, and I've had a 3-monitor
setup lose one monitor and then followed the wrong procedure to get it back up
and running.

I've seen others on the mailing list asking for help after upgrade problems.

This is exactly why RBD incremental backup makes me happy, because it should
make it easier to keep up-to-date copies/snapshots on multiple Ceph
installations.
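
Roughly, the workflow I have in mind looks like this (pool, image and snapshot
names are only examples):

    # On the primary cluster: take a new snapshot and export only the
    # changes since the previous one
    rbd snap create rbd/vm-disk@snap2
    rbd export-diff --from-snap snap1 rbd/vm-disk@snap2 vm-disk.snap1-snap2.diff

    # On the backup cluster: apply the delta to the copy of the image,
    # which must already contain snap1
    rbd import-diff vm-disk.snap1-snap2.diff rbd/vm-disk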

> >
> 
> -- 
> Software Engineer #42 @ http://inktank.com | http://ceph.com


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



