Switch from "default" replicated_ruleset to separated rules: what happens with existing pool ?

Hello,

we have:

Ceph version: Jewel
Hosts: 6
OSDs per Host: 12
OSDs type: 6 SATA / 6 SSD

We started with a "generic" pool on our SSDs. Now we have added the SATA OSDs to the same hosts and reassigned the CRUSH hierarchy (a sketch of the commands follows the tree output below):

==================
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-19 16.37952 root sata
 -9 16.37952     datacenter qh-sata
-10 16.37952         rack a07-sata
-12  2.72992             host qh-a07-ceph-osd-06-sata
100 0.45499                 osd.100                       up  1.00000          1.00000
101 0.45499                 osd.101                       up  1.00000          1.00000
102 0.45499                 osd.102                       up  1.00000          1.00000
103 0.45499                 osd.103                       up  1.00000          1.00000
104 0.45499                 osd.104                       up  1.00000          1.00000
105 0.45499                 osd.105                       up  1.00000          1.00000
 -7  2.72992             host qh-a07-ceph-osd-01-sata
  0 0.45499                 osd.0                         up  1.00000          1.00000
  1 0.45499                 osd.1                         up  1.00000          1.00000
  2 0.45499                 osd.2                         up  1.00000          1.00000
  3 0.45499                 osd.3                         up  1.00000          1.00000
  4 0.45499                 osd.4                         up  1.00000          1.00000
  5 0.45499                 osd.5                         up  1.00000          1.00000

[...]

  -1 16.37952 root ssds
-13 16.37952     datacenter qh-ssds
-14 16.37952         rack a07-ssds
-15  2.72992             host qh-a07-ceph-osd-06-ssds
 94 0.45499                 osd.94                        up  1.00000          1.00000
 95 0.45499                 osd.95                        up  1.00000          1.00000
 96 0.45499                 osd.96                        up  1.00000          1.00000
 97 0.45499                 osd.97                        up  1.00000          1.00000
 98 0.45499                 osd.98                        up  1.00000          1.00000
 99 0.45499                 osd.99                        up  1.00000          1.00000
 -6  2.72992             host qh-a07-ceph-osd-05-ssds
 72 0.45499                 osd.72                        up  1.00000          1.00000
 77 0.45499                 osd.77                        up  1.00000          1.00000
 81 0.45499                 osd.81                        up  1.00000          1.00000
 85 0.45499                 osd.85                        up  1.00000          1.00000
 89 0.45499                 osd.89                        up  1.00000          1.00000
 93 0.45499                 osd.93                        up  1.00000          1.00000
===================
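
For reference, the hierarchy above was built roughly like this (a sketch from memory, using the bucket names shown above; the exact order and weights may differ, and the same steps were repeated for the other hosts and for the ssds tree):

===================
# create the new buckets for the SATA tree
ceph osd crush add-bucket sata root
ceph osd crush add-bucket qh-sata datacenter
ceph osd crush add-bucket a07-sata rack
ceph osd crush add-bucket qh-a07-ceph-osd-01-sata host

# attach them to each other
ceph osd crush move qh-sata root=sata
ceph osd crush move a07-sata datacenter=qh-sata
ceph osd crush move qh-a07-ceph-osd-01-sata rack=a07-sata

# place the SATA OSDs under their new host bucket (repeated per OSD and host)
ceph osd crush set osd.0 0.45499 host=qh-a07-ceph-osd-01-sata
===================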

We created two new rules, as all the howtos say (a sketch of the commands follows the dump):

===================
# ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_ruleset",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "ssds"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    },
    {
        "rule_id": 1,
        "rule_name": "qh-a07-satapool",
        "ruleset": 1,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -10,
                "item_name": "a07-sata"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "rack"
            },
            {
                "op": "emit"
            }
        ]
    },
    {
        "rule_id": 2,
        "rule_name": "qh-a07-ssdpool",
        "ruleset": 2,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -14,
                "item_name": "a07-ssds"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "rack"
            },
            {
                "op": "emit"
            }
        ]
    }
]
===================
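
The two new rules came from create-simple, along these lines (a sketch; rule and bucket names as in the dump above):

===================
# SATA rule: take the a07-sata rack, chooseleaf across type "rack"
ceph osd crush rule create-simple qh-a07-satapool a07-sata rack

# SSD rule: take the a07-ssds rack, chooseleaf across type "rack"
ceph osd crush rule create-simple qh-a07-ssdpool a07-ssds rack
===================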

Our existing pools:

4 .rgw.root,5 default.rgw.control,6 default.rgw.data.root,7 default.rgw.gc,8 default.rgw.log,9 default.rgw.users.uid,10 default.rgw.users.email,11 default.rgw.users.keys,12 default.rgw.users.swift,13 ssd,


ceph df:

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    50097G     49703G         393G          0.79
POOLS:
    NAME                        ID     USED     %USED     MAX AVAIL     OBJECTS
    .rgw.root                   4      1588     0         5444G         4
    default.rgw.control         5      0        0         5444G         8
    default.rgw.data.root       6      0        0         5444G         0
    default.rgw.gc               7      0        0         5444G         0
    default.rgw.log             8      0        0         5444G         0
    default.rgw.users.uid       9      520      0         5444G         1
    default.rgw.users.email     10     10       0         5444G         1
    default.rgw.users.keys      11     10       0         5444G         1
    default.rgw.users.swift     12     10       0         5444G         1
    ssd                         13     129G     2.32      5444G         39157


===================

Ceph health:

root@qh-a07-ceph-osd-06:~# ceph -s
    cluster 6ded156c-5855-4974-8ff6-be2332ba3a51
     health HEALTH_OK
monmap e6: 6 mons at {0=10.3.0.1:6789/0,1=10.3.0.2:6789/0,2=10.3.0.3:6789/0,3=10.3.0.4:6789/0,4=10.3.0.5:6789/0,5=10.3.0.6:6789/0}
            election epoch 96, quorum 0,1,2,3,4,5 0,1,2,3,4,5
     osdmap e4001: 72 osds: 72 up, 72 in
            flags sortbitwise,require_jewel_osds
      pgmap v1244347: 2120 pgs, 10 pools, 129 GB data, 39162 objects
            393 GB used, 49703 GB / 50097 GB avail
                2120 active+clean
  client io 27931 B/s wr, 0 op/s rd, 4 op/s wr

===================


I created a new pool with 2048 PGs (like the existing pool "ssd") and assigned rule 1 to it. Then we got a HEALTH_WARN with something like "creating+peering, acting []" for 54 PGs (quoting from memory). I'm pretty sure those 54 PGs had no "free" OSDs they could be assigned to, because ruleset 0 takes all OSDs instead of only those under "a07-ssds". Is that correct?
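
I guess one could check that offline with crushtool against the current map (the 3 replicas below are just my assumption):

===================
# dump the current compiled crushmap
ceph osd getcrushmap -o crushmap.bin

# test which OSDs rule 1 would select for 3 replicas; short/bad mappings get listed
crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-bad-mappings
===================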

What happens to the existing pool if we do:

# ceph osd pool set sata crush_ruleset 1
# ceph osd pool set ssd crush_ruleset 2


So basically switching the "ssd" pool from rule_id 0 to rule_id 2?
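
(For checking before and after the switch I would just use the standard commands, as far as I know: look at the ruleset currently assigned to each pool and then follow the data movement.)

===================
# show which ruleset the pools currently use
ceph osd pool get ssd crush_ruleset
ceph osd pool get sata crush_ruleset

# after switching, follow the recovery/backfill
ceph -s
ceph -w
===================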


cu denny