Re: Use 2 OSDs to create cluster but health check displays "active+degraded"


 



Righty, both OSDs are on the same host, so you will need to amend the default CRUSH rule. It will look something like:

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host   <=== ah! host!
        step emit
}

So you will need to change host to osd in that chooseleaf step, so CRUSH picks replicas across OSDs rather than across hosts (with only one host, the second replica has nowhere to go, hence active+degraded).
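If you have not edited a CRUSH map before, the round trip looks roughly like this (just a sketch; the file names crushmap.bin / crushmap.txt are arbitrary placeholders):

ceph osd getcrushmap -o crushmap.bin       # dump the compiled CRUSH map
crushtool -d crushmap.bin -o crushmap.txt  # decompile it to editable text

# in crushmap.txt, change
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd

crushtool -c crushmap.txt -o crushmap.new  # recompile the edited map
ceph osd setcrushmap -i crushmap.new       # inject it into the cluster

Once the new map is in, the PGs should be able to peer across osd.0 and osd.1 and the cluster should settle to active+clean.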

See http://ceph.com/docs/master/rados/operations/crush-map/ for a discussion of what/how on this front!

Regards

Mark

On 29/10/14 22:19, Vickie CH wrote:
Hi:
-----------------------------ceph osd tree-----------------------------------
# id    weight  type name       up/down reweight
-1      1.82    root default
-2      1.82            host storage1
0       0.91                    osd.0   up      1
1       0.91                    osd.1   up      1


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




