Place on separate hosts?

I've been using Ceph for nearly a year, and one of the things I ran into
quite a while back is that Ceph seems to place copies of objects on
different OSDs, but by default those OSDs can sometimes end up on the
same host. Is that correct? I discovered this by taking down one host
and having some PGs become inactive.
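
(In case it helps anyone reproduce this, I believe the stuck PGs would
show up with something like:

ceph health detail
ceph pg dump_stuck inactive

though I didn't keep the exact output from when it happened.)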

So I guess you could say I want my failure domain to be the host, not
the OSD.

How would I accomplish this? I understand it involves changing the
CRUSH map. I've been reading over
http://docs.ceph.com/docs/master/rados/operations/crush-map/ and it
still isn't clear to me what needs to change. I expect I need to change
the default replicated_ruleset, which I'm still using:

$ ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_ruleset",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    }
]
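
For completeness, I assume I could also dump and decompile the full
CRUSH map to look at the bucket hierarchy, with something like:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

(crushmap.bin/crushmap.txt are just filenames I made up), though I'm
not sure that would tell me anything the rule dump above doesn't.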


And that I need something like:

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>

then:

ceph osd pool set <pool-name> crush_rule <rule-name>

but I'm not sure what the values of <root> <failure-domain> <class>
would be in my situation. Maybe:

ceph osd crush rule create-replicated different-host default <failure-domain> <class>

but I don't know what <failure-domain> or <class> should be just by
inspecting my current CRUSH map.
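
My best guess at finding those values is to look at the CRUSH hierarchy
and the device classes with something like:

ceph osd tree
ceph osd crush class ls

and then, assuming my root really is "default", the host buckets are
what I want to spread across, and all my OSDs report class "hdd" (none
of which I've actually confirmed on my cluster), the command might end
up as:

ceph osd crush rule create-replicated different-host default host hdd

but that's just a guess on my part, so please correct me if I'm off
base.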

Suggestions are greatly appreciated!

-- 
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
