Re: Place on separate hosts?

On Fri, May 04, 2018 at 12:08:35AM PDT, Tracy Reed spake thusly:
> I've been using ceph for nearly a year and one of the things I ran into
> quite a while back was that it seems like ceph is placing copies of
> objects on different OSDs but sometimes those OSDs can be on the same
> host by default. Is that correct? I discovered this by taking down one
> host and having some pgs become inactive. 

Actually, this (admittedly ancient) document:

https://jcftang.github.io/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/

says "As the default CRUSH map replicates across OSD’s I wanted to try
replicating data across hosts just to see what would happen." This would
seem to align with my experience as far as the default goes. However,
this:

http://docs.ceph.com/docs/master/rados/operations/crush-map/

says:

"When you deploy OSDs they are automatically placed within the CRUSH map
under a host node named with the hostname for the host they are running
on. This, combined with the default CRUSH failure domain, ensures that
replicas or erasure code shards are separated across hosts and a single
host failure will not affect availability."
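In the decompiled CRUSH map, the difference those two documents describe comes down to the bucket type named in the rule's chooseleaf step. A sketch of the two variants (rule name and id illustrative, not necessarily what any given cluster uses):

```
# Current default: replicas separated across hosts
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}

# Older default: replicas separated only across OSDs,
# so two copies can land on the same host
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type osd
    step emit
}
```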

How can I tell which way mine is configured? I could post the whole
crushmap if necessary, but it's a bit large to copy and paste.
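For what it's worth, one way to answer this yourself without pasting the whole map (assuming admin access to the cluster) is to dump just the rules and look at the chooseleaf step:

```shell
# JSON view of the CRUSH rules; check the "type" field in the
# chooseleaf_firstn step of the rule your pools use:
ceph osd crush rule dump

# Or decompile the full map to text and grep the rule steps:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
grep 'step chooseleaf' crushmap.txt
```

If the rule says `step chooseleaf firstn 0 type host`, replicas are placed on different hosts; `type osd` only guarantees different OSDs, which would match the inactive PGs seen when taking one host down.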

-- 
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.

Attachment: signature.asc
Description: PGP signature

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
