Drop old SSD / HDD host crushmap rules

Hello,

on one DC I have some very old CRUSH map rules from the very beginning,
which split the HDD and SSD disks into separate roots. This has been
obsolete since Luminous introduced device classes, and I want to drop the
rules:

# ceph osd crush rule ls

replicated_rule
fc-r02-ssdpool
fc-r02-satapool
fc-r02-ssd

=================
# ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_rule",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    },
    {
        "rule_id": 1,
        "rule_name": "fc-r02-ssdpool",
        "ruleset": 1,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -15,
                "item_name": "r02-ssds"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    },
    {
        "rule_id": 2,
        "rule_name": "fc-r02-satapool",
        "ruleset": 2,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -16,
                "item_name": "r02-sata"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    },
    {
        "rule_id": 3,
        "rule_name": "fc-r02-ssd",
        "ruleset": 3,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -4,
                "item_name": "default~ssd"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    }
]

==========================


# ceph osd tree

ID  CLASS WEIGHT   TYPE NAME                           STATUS REWEIGHT PRI-AFF
-14              0 root sata
-18              0     datacenter fc-sata
-16              0         rack r02-sata
-13       18.55234 root ssds
-17       18.55234     datacenter fc-ssds
-15       18.55234         rack r02-ssds
 -6        3.09206             host fc-r02-ceph-osd-01
 41  nvme  0.36388                 osd.41                  up  1.00000 1.00000
  0   ssd  0.45470                 osd.0                   up  1.00000 1.00000
  1   ssd  0.45470                 osd.1                   up  1.00000 1.00000
  2   ssd  0.45470                 osd.2                   up  1.00000 1.00000
  3   ssd  0.45470                 osd.3                   up  1.00000 1.00000
  4   ssd  0.45470                 osd.4                   up  1.00000 1.00000
  5   ssd  0.45470                 osd.5                   up  1.00000 1.00000
 -2        3.09206             host fc-r02-ceph-osd-02
 36  nvme  0.36388                 osd.36                  up  1.00000 1.00000
  6   ssd  0.45470                 osd.6                   up  1.00000 1.00000
  7   ssd  0.45470                 osd.7                   up  1.00000 1.00000
  8   ssd  0.45470                 osd.8                   up  1.00000 1.00000
  9   ssd  0.45470                 osd.9                   up  1.00000 1.00000
 10   ssd  0.45470                 osd.10                  up  1.00000 1.00000
 29   ssd  0.45470                 osd.29                  up  1.00000 1.00000
 -5        3.45593             host fc-r02-ceph-osd-03
 38  nvme  0.36388                 osd.38                  up  1.00000 1.00000
 40  nvme  0.36388                 osd.40                  up  1.00000 1.00000
 11   ssd  0.45470                 osd.11                  up  1.00000 1.00000
 12   ssd  0.45470                 osd.12                  up  1.00000 1.00000
 13   ssd  0.45470                 osd.13                  up  1.00000 1.00000
 14   ssd  0.45470                 osd.14                  up  1.00000 1.00000
 15   ssd  0.45470                 osd.15                  up  1.00000 1.00000
 16   ssd  0.45470                 osd.16                  up  1.00000 1.00000
 -9        3.09206             host fc-r02-ceph-osd-04
 37  nvme  0.36388                 osd.37                  up  1.00000 1.00000
 30   ssd  0.45470                 osd.30                  up  1.00000 1.00000
 31   ssd  0.45470                 osd.31                  up  1.00000 1.00000
 32   ssd  0.45470                 osd.32                  up  1.00000 1.00000
 33   ssd  0.45470                 osd.33                  up  1.00000 1.00000
 34   ssd  0.45470                 osd.34                  up  1.00000 1.00000
 35   ssd  0.45470                 osd.35                  up  1.00000 1.00000
-11        2.72818             host fc-r02-ceph-osd-05
 17   ssd  0.45470                 osd.17                  up  1.00000 1.00000
 18   ssd  0.45470                 osd.18                  up  1.00000 1.00000
 19   ssd  0.45470                 osd.19                  up  1.00000 1.00000
 20   ssd  0.45470                 osd.20                  up  1.00000 1.00000
 21   ssd  0.45470                 osd.21                  up  1.00000 1.00000
 22   ssd  0.45470                 osd.22                  up  1.00000 1.00000
-25        3.09206             host fc-r02-ceph-osd-06
 39  nvme  0.36388                 osd.39                  up  1.00000 1.00000
 23   ssd  0.45470                 osd.23                  up  1.00000 1.00000
 24   ssd  0.45470                 osd.24                  up  1.00000 1.00000
 25   ssd  0.45470                 osd.25                  up  1.00000 1.00000
 26   ssd  0.45470                 osd.26                  up  1.00000 1.00000
 27   ssd  0.45470                 osd.27                  up  1.00000 1.00000
 28   ssd  0.45470                 osd.28                  up  1.00000 1.00000
 -1              0 root default


===================================================
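
I have not pasted which pools still reference which of the old rules; I
would check that first, roughly like this (<poolname> is just a
placeholder):

# ceph osd pool ls detail | grep crush_rule
# ceph osd pool get <poolname> crush_rule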

The question now is: what is the best way to drop them and just use the
Ceph defaults?
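
My rough idea so far is, in some order I am not yet sure about (completely
untested; the new rule name is just a placeholder I made up):

# ceph osd crush rule create-replicated fc-r02-ssd-class default host ssd

then repoint every pool that still uses one of the old rules:

# ceph osd pool set <poolname> crush_rule fc-r02-ssd-class

then move the hosts from the custom root back under the default root:

# ceph osd crush move fc-r02-ceph-osd-01 root=default
(and the same for fc-r02-ceph-osd-02 ... 06)

and finally remove the old rules and the then-empty buckets:

# ceph osd crush rule rm fc-r02-ssdpool
# ceph osd crush rule rm fc-r02-satapool
# ceph osd crush rule rm fc-r02-ssd
# ceph osd crush rm r02-ssds
# ceph osd crush rm fc-ssds
# ceph osd crush rm ssds
# ceph osd crush rm r02-sata
# ceph osd crush rm fc-sata
# ceph osd crush rm sata

What I am unsure about is the ordering: the hosts currently sit under root
"ssds", not under "default" (which has weight 0), so switching the pools to
a rule on the default root before the hosts are moved would probably leave
the PGs without a valid mapping. Would it be better to set
norebalance/norecover around the whole change, or to do everything in one
step offline via ceph osd getcrushmap, crushtool and ceph osd setcrushmap?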

Any suggestions?

cu denny



