Re: Minimum failure domain

The classic case is when you are just trying Ceph out on a laptop (e.g., using file directories for OSDs, setting the replica size to 2, and setting osd_crush_chooseleaf_type to 0).
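
For that single-node case, a minimal ceph.conf sketch (just the two settings mentioned above, written as they would appear in the [global] section; values are for illustration) might look like:

    [global]
    # Keep two copies of each object...
    osd_pool_default_size = 2
    # ...but let CRUSH pick different OSDs rather than different hosts,
    # since everything lives on one node
    osd_crush_chooseleaf_type = 0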

The statement is a guideline. You could, in fact, create a CRUSH hierarchy that groups OSDs by shared journal SSD within a host. However, treating the host as the failure domain is preferred, because you typically need to power down the whole host to change a drive (assuming it's not hot-swappable), which takes all of its OSDs offline at once.
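
If you really wanted to go that route, here is a rough sketch of what it could look like in a decompiled CRUSH map; the custom type name journalgroup and the bucket names are invented for the example, and I'm not recommending this over the host-level domain:

    # hypothetical CRUSH map excerpt: a custom level between osd and host
    type 0 osd
    type 1 journalgroup   # OSDs sharing one journal SSD
    type 2 host
    type 3 rack
    type 4 root

    journalgroup node1-ssd0 {
            id -10
            alg straw
            hash 0  # rjenkins1
            item osd.0 weight 1.000
            item osd.1 weight 1.000
    }

    host node1 {
            id -2
            alg straw
            hash 0  # rjenkins1
            item node1-ssd0 weight 2.000
    }

A replication rule whose chooseleaf step used "type journalgroup" instead of "type host" would then spread replicas across journal groups rather than hosts.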

There are cases with high-density systems where you have multiple nodes in the same chassis, so a single chassis failure takes several nodes down at once. In a case like that you might opt for a higher minimum failure domain, such as the chassis.
There are also cases in larger clusters where you might have, for example, three racks of servers with three top-of-rack switches, one per rack. If you want to treat a top-of-rack switch as a failure domain, add the nodes/chassis to rack buckets within your CRUSH hierarchy and select the rack level as your minimum failure domain. In those scenarios, the primary OSDs will replicate your copies to OSDs on nodes in other chassis or racks, respectively.
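
A minimal CLI sketch of that rack layout (the rack, host, and pool names are placeholders; the same pattern works with the chassis bucket type for the multi-node-chassis case):

    # Create the rack buckets and hang them off the default root
    ceph osd crush add-bucket rack1 rack
    ceph osd crush add-bucket rack2 rack
    ceph osd crush add-bucket rack3 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move rack2 root=default
    ceph osd crush move rack3 root=default

    # Move each host under its rack
    ceph osd crush move node1 rack=rack1
    ceph osd crush move node2 rack=rack2
    ceph osd crush move node3 rack=rack3

    # Create a replicated rule that separates copies by rack,
    # then point a pool at it
    ceph osd crush rule create-simple replicated_rack default rack
    ceph osd pool set mypool crush_ruleset <ruleset number from 'ceph osd crush rule dump'>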

On Thu, Oct 15, 2015 at 1:55 PM, J David <j.david.lists@xxxxxxxxx> wrote:
In the Ceph docs, at:

http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/

It says (under "Prepare OSDs"):

"Note: When running multiple Ceph OSD daemons on a single node, and
sharing a partitioned journal with each OSD daemon, you should consider
the entire node the minimum failure domain for CRUSH purposes, because
if the SSD drive fails, all of the Ceph OSD daemons that journal to it
will fail too."

This, of course, makes perfect sense.  But, it got me wondering...
under what circumstances would one *not* consider a single node to be
the minimum failure domain for CRUSH purposes?

Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
John Wilkins
Red Hat
jowilkin@xxxxxxxxxx
(415) 425-9599
http://redhat.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
