[ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

Gregory Farnum <greg@...> writes:

> So those disks are actually different sizes, in proportion to their
> weights? It could be having an impact on this, although it *shouldn't*
> be an issue. And your tree looks like it's correct, which leaves me
> thinking that something is off about your crush rules. :/
> Anyway, having looked at that, what are your crush rules? ("ceph osd
> crush dump" will provide that and some other useful data in json
> format. I checked the command this time.)
> And can you run "ceph pg dump" and put that on pastebin for viewing?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> 

At the time I posted, I was using the default crush map. I have since set up a new 
crush map with an ssd pool and a sata pool. Here is the tree:

ceph osd tree
# id    weight  type name       up/down reweight
-6      4       root ssd
-1      2               host ceph-ssd-01
1       1                       osd.1   up      1
2       1                       osd.2   up      1
-2      2               host ceph-ssd-02
5       1                       osd.5   up      1
6       1                       osd.6   up      1
-5      4       root sata
-3      2               host ceph-sata-01
0       1                       osd.0   up      1
3       1                       osd.3   up      1
-4      2               host ceph-sata-02
4       1                       osd.4   up      1
7       1                       osd.7   up      1

(which all looks correct: the SSDs are in the proper places, as are the SATA 
disks).
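For reference, a map like this is typically edited with the usual decompile/recompile cycle, roughly as below (the file names are just placeholders):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt (add the ssd/sata roots, hosts, and rules)
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new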

Here is my now non-default crush map that I have edited:

[root@monitor01 ceph]# ceph osd crush dump
{ "devices": [
        { "id": 0,
          "name": "osd.0"},
        { "id": 1,
          "name": "osd.1"},
        { "id": 2,
          "name": "osd.2"},
        { "id": 3,
          "name": "osd.3"},
        { "id": 4,
          "name": "osd.4"},
        { "id": 5,
          "name": "osd.5"},
        { "id": 6,
          "name": "osd.6"},
        { "id": 7,
          "name": "osd.7"}],
  "types": [
        { "type_id": 0,
          "name": "osd"},
        { "type_id": 1,
          "name": "host"},
        { "type_id": 2,
          "name": "chassis"},
        { "type_id": 3,
          "name": "rack"},
        { "type_id": 4,
          "name": "row"},
        { "type_id": 5,
          "name": "pdu"},
        { "type_id": 6,
          "name": "pod"},
        { "type_id": 7,
          "name": "room"},
        { "type_id": 8,
          "name": "datacenter"},
        { "type_id": 9,
          "name": "region"},
        { "type_id": 10,
          "name": "root"}],
  "buckets": [
        { "id": -1,
          "name": "ceph-ssd-01",
          "type_id": 1,
          "type_name": "host",
          "weight": 131072,
          "alg": "straw",
          "hash": "rjenkins1",
          "items": [
                { "id": 1,
                  "weight": 65536,
                  "pos": 0},
                { "id": 2,
                  "weight": 65536,
                  "pos": 1}]},
        { "id": -2,
          "name": "ceph-ssd-02",
          "type_id": 1,
          "type_name": "host",
          "weight": 131072,
          "alg": "straw",
          "hash": "rjenkins1",
          "items": [
                { "id": 5,
                  "weight": 65536,
                  "pos": 0},
                { "id": 6,
                  "weight": 65536,
                  "pos": 1}]},
        { "id": -3,
          "name": "ceph-sata-01",
          "type_id": 1,
          "type_name": "host",
          "weight": 131072,
          "alg": "straw",
          "hash": "rjenkins1",
          "items": [
                { "id": 0,
                  "weight": 65536,
                  "pos": 0},
                { "id": 3,
                  "weight": 65536,
                  "pos": 1}]},
        { "id": -4,
          "name": "ceph-sata-02",
          "type_id": 1,
          "type_name": "host",
          "weight": 131072,
          "alg": "straw",
          "hash": "rjenkins1",
          "items": [
                { "id": 4,
                  "weight": 65536,
                  "pos": 0},
                { "id": 7,
                  "weight": 65536,
                  "pos": 1}]},
        { "id": -5,
          "name": "sata",
          "type_id": 10,
          "type_name": "root",
          "weight": 262144,
          "alg": "straw",
          "hash": "rjenkins1",
          "items": [
                { "id": -3,
                  "weight": 131072,
                  "pos": 0},
                { "id": -4,
                  "weight": 131072,
                  "pos": 1}]},
        { "id": -6,
          "name": "ssd",
          "type_id": 10,
          "type_name": "root",
          "weight": 262144,
          "alg": "straw",
          "hash": "rjenkins1",
          "items": [
                { "id": -1,
                  "weight": 131072,
                  "pos": 0},
                { "id": -2,
                  "weight": 131072,
                  "pos": 1}]}],
  "rules": [
        { "rule_id": 0,
          "rule_name": "data",
          "ruleset": 0,
          "type": 1,
          "min_size": 2,
          "max_size": 2,
          "steps": [
                { "op": "take",
                  "item": -5,
                  "item_name": "sata"},
                { "op": "chooseleaf_firstn",
                  "num": 0,
                  "type": "host"},
                { "op": "emit"}]},
        { "rule_id": 1,
          "rule_name": "metadata",
          "ruleset": 1,
          "type": 1,
          "min_size": 0,
          "max_size": 10,
          "steps": [
                { "op": "take",
                  "item": -5,
                  "item_name": "sata"},
                { "op": "chooseleaf_firstn",
                  "num": 0,
                  "type": "host"},
                { "op": "emit"}]},
        { "rule_id": 2,
          "rule_name": "rbd",
          "ruleset": 2,
          "type": 1,
          "min_size": 0,
          "max_size": 10,
          "steps": [
                { "op": "take",
                  "item": -5,
                  "item_name": "sata"},
                { "op": "chooseleaf_firstn",
                  "num": 0,
                  "type": "host"},
                { "op": "emit"}]},
        { "rule_id": 3,
          "rule_name": "sata",
          "ruleset": 3,
          "type": 1,
          "min_size": 0,
          "max_size": 10,
          "steps": [
                { "op": "take",
                  "item": -5,
                  "item_name": "sata"},
                { "op": "chooseleaf_firstn",
                  "num": 0,
                  "type": "host"},
                { "op": "emit"}]},
        { "rule_id": 4,
          "rule_name": "ssd",
          "ruleset": 4,
          "type": 1,
          "min_size": 0,
          "max_size": 4,
          "steps": [
                { "op": "take",
                  "item": -6,
                  "item_name": "ssd"},
                { "op": "chooseleaf_firstn",
                  "num": 0,
                  "type": "host"},
                { "op": "emit"}]},
        { "rule_id": 5,
          "rule_name": "ssd-primary",
          "ruleset": 5,
          "type": 1,
          "min_size": 5,
          "max_size": 10,
          "steps": [
                { "op": "take",
                  "item": -6,
                  "item_name": "ssd"},
                { "op": "chooseleaf_firstn",
                  "num": 1,
                  "type": "host"},
                { "op": "emit"},
                { "op": "take",
                  "item": -5,
                  "item_name": "sata"},
                { "op": "chooseleaf_firstn",
                  "num": -1,
                  "type": "host"},
                { "op": "emit"}]}],
  "tunables": { "choose_local_tries": 0,
      "choose_local_fallback_tries": 0,
      "choose_total_tries": 50,
      "chooseleaf_descend_once": 1,
      "profile": "bobtail",
      "optimal_tunables": 0,
      "legacy_tunables": 0,
      "require_feature_tunables": 1,
      "require_feature_tunables2": 1}}
[root@monitor01 ceph]#

This is pretty much copied and pasted from the Ceph documentation here: 
http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
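The step from that page that matters here is pointing each pool at its ruleset; with the rules above that would be something along these lines (assuming pools actually named sata and ssd):

ceph osd pool set sata crush_ruleset 3
ceph osd pool set ssd crush_ruleset 4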

I then set up my cache pool following this, replacing the foo pool with sata 
and foo-hot with the ssd pool: http://ceph.com/docs/firefly/dev/cache-pool/
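With that substitution, the tiering setup from that page comes out roughly as follows (a sketch, assuming the writeback mode and bloom hit set used in the doc's example):

ceph osd tier add sata ssd
ceph osd tier cache-mode ssd writeback
ceph osd tier set-overlay sata ssd
ceph osd pool set ssd hit_set_type bloom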

Here is the pg dump: http://pastebin.com/Vgpq6bxY



