And also make sure the OSD <-> host mapping is correct with "ceph osd tree". :)
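For reference, a tree that replicates across hosts should show each OSD
nested under a host bucket, something like this (illustrative output
only; the IDs, hostnames, and weights here are made up):

# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       3.00000 root default
-2       1.00000     host ceph-a
 0   hdd 1.00000         osd.0       up  1.00000 1.00000
-3       1.00000     host ceph-b
 1   hdd 1.00000         osd.1       up  1.00000 1.00000
-4       1.00000     host ceph-c
 2   hdd 1.00000         osd.2       up  1.00000 1.00000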
On Fri, May 4, 2018 at 1:44 AM Matthew Vernon <mv3@xxxxxxxxxxxx> wrote:
Hi,
On 04/05/18 08:25, Tracy Reed wrote:
> On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly:
>> https://jcftang.github.io/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/
> <snip>
>> How can I tell which way mine is configured? I could post the whole
>> crushmap if necessary but it's a bit large to copy and paste.
>
> To further answer my own question (sorry for the spam), the above-linked
> doc says this should do what I want:
>
> step chooseleaf firstn 0 type host
>
> which is what I already have in my crush map. So it looks like the
> default is as I want it. In which case I wonder why I had the problem
> previously... I guess the only way to know for sure is to stop one OSD
> node and see what happens.
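One way to check the rule directly (a sketch; the file paths are
arbitrary) is to decompile the CRUSH map and look at the chooseleaf
step. Here "firstn 0" means "pick as many items as the pool's size",
and "type host" means each pick descends into a distinct host bucket:

# ceph osd getcrushmap -o /tmp/crushmap
# crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
# grep chooseleaf /tmp/crushmap.txt
        step chooseleaf firstn 0 type host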
You can ask ceph which OSDs a particular pg is on:
root@sto-1-1:~# ceph pg map 71.983
osdmap e435728 pg 71.983 (71.983) -> up [1948,2984,511] acting [1948,2984,511]
...then you can check these are on different hosts.
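To map an OSD id back to its host, "ceph osd find" is handy (a sketch;
the exact JSON fields vary by release, and the hostname shown here is
made up):

root@sto-1-1:~# ceph osd find 1948 | grep host
        "host": "sto-2-7",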
HTH,
Matthew
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com