Re: Adding OSD

Hello,

I guess you have to add at least 1 more SSD disk on the hosts SSD-ceph02 to SSD-ceph05.
Do you use replicated size 3? 
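You can verify the pool's replica size and rule with, for example (the pool
name <your-ssd-pool> is just a placeholder):

    ceph osd pool get <your-ssd-pool> size
    ceph osd pool get <your-ssd-pool> crush_rule
    ceph osd df tree

ceph osd df tree also shows the per-host usage, so you can see how unbalanced
the SSD hosts are after adding osd.40 to only one of them.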

Hth
Mehmet

On 21 August 2020 23:01:36 MESZ, jcharles@xxxxxxxxxxxx wrote:
>Here you are.
>
>The OSD that has been added is osd.40 (SSD class), and it's a Nautilus
>cluster.
>Thanks for helping.
>
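>In case it helps, the CRUSH location of the new OSD can be double-checked
>with:
>
>    ceph osd find 40
>
>Its device class also appears in the CLASS column of the tree below.
>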
>------------- Crush tree
>
>ID  CLASS WEIGHT    (compat) TYPE NAME
>-26        33.37372          root HDD10
>-33         6.67474  6.67474     host HDD10-ceph01
>  1 hdd10   1.66869  1.66869         osd.1
>  5 hdd10   1.66869  1.66869         osd.5
> 10 hdd10   1.66869  1.66869         osd.10
> 14 hdd10   1.66869  1.66869         osd.14
>-34         6.67474  6.67474     host HDD10-ceph02
>  2 hdd10   1.66869  1.66869         osd.2
>  7 hdd10   1.66869  1.66869         osd.7
> 11 hdd10   1.66869  1.66869         osd.11
> 15 hdd10   1.66869  1.66869         osd.15
>-35         6.67474  6.67474     host HDD10-ceph03
>  0 hdd10   1.66869  1.66869         osd.0
>  6 hdd10   1.66869  1.66869         osd.6
> 12 hdd10   1.66869  1.66869         osd.12
> 16 hdd10   1.66869  1.66869         osd.16
>-36         6.67474  6.67474     host HDD10-ceph04
>  3 hdd10   1.66869  1.66869         osd.3
>  8 hdd10   1.66869  1.66869         osd.8
> 13 hdd10   1.66869  1.66869         osd.13
> 17 hdd10   1.66869  1.66869         osd.17
>-37         6.67474  6.67474     host HDD10-ceph05
>  4 hdd10   1.66869  1.66869         osd.4
> 34 hdd10   1.66869  1.66869         osd.34
> 35 hdd10   1.66869  1.66869         osd.35
> 36 hdd10   1.66869  1.66869         osd.36
>-25        10.53297          root SSD
>-42         3.54659  3.54659     host SSD-ceph01
> 31   ssd   1.74660  1.74660         osd.31
> 40   ssd   1.79999  1.79999         osd.40
>-41         1.74660  1.74660     host SSD-ceph02
> 30   ssd   1.74660  1.74660         osd.30
>-40         1.74660  1.74660     host SSD-ceph03
> 33   ssd   1.74660  1.74660         osd.33
>-39         1.74660  1.74660     host SSD-ceph04
> 32   ssd   1.74660  1.74660         osd.32
>-38         1.74660  1.74660     host SSD-ceph05
> 39   ssd   1.74660  1.74660         osd.39
> -1       166.64085          root default
> -3        33.32817 33.32817     host ceph01
> 18   hdd  11.10939 11.10939         osd.18
> 22   hdd  11.10939 11.10939         osd.22
> 26   hdd  11.10939 11.10939         osd.26
> -5        33.32817 33.32817     host ceph02
> 19   hdd  11.10939 11.10939         osd.19
> 23   hdd  11.10939 11.10939         osd.23
> 27   hdd  11.10939 11.10939         osd.27
> -9        33.32817 33.32817     host ceph03
> 21   hdd  11.10939 11.10939         osd.21
> 25   hdd  11.10939 11.10939         osd.25
> 29   hdd  11.10939 11.10939         osd.29
> -7        33.32817 33.32817     host ceph04
> 20   hdd  11.10939 11.10939         osd.20
> 24   hdd  11.10939 11.10939         osd.24
> 28   hdd  11.10939 11.10939         osd.28
>-11        33.32817 33.32817     host ceph05
>  9   hdd  11.10939 11.10939         osd.9
> 37   hdd  11.10939 11.10939         osd.37
> 38   hdd  11.10939 11.10939         osd.38
>
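>The rules below reference shadow buckets such as SSD~ssd (item -32) and
>HDD10~hdd10 (item -27), which do not appear in the tree above; they can be
>listed with:
>
>    ceph osd crush tree --show-shadow
>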
>------------- Crush rules
>
>[
>    {
>        "rule_id": 0,
>        "rule_name": "replicated_rule",
>        "ruleset": 0,
>        "type": 1,
>        "min_size": 1,
>        "max_size": 10,
>        "steps": [
>            {
>                "op": "take",
>                "item": -1,
>                "item_name": "default"
>            },
>            {
>                "op": "chooseleaf_firstn",
>                "num": 0,
>                "type": "host"
>            },
>            {
>                "op": "emit"
>            }
>        ]
>    },
>    {
>        "rule_id": 1,
>        "rule_name": "replicated-ssd",
>        "ruleset": 1,
>        "type": 1,
>        "min_size": 1,
>        "max_size": 10,
>        "steps": [
>            {
>                "op": "take",
>                "item": -32,
>                "item_name": "SSD~ssd"
>            },
>            {
>                "op": "chooseleaf_firstn",
>                "num": 0,
>                "type": "host"
>            },
>            {
>                "op": "emit"
>            }
>        ]
>    },
>    {
>        "rule_id": 2,
>        "rule_name": "replicated-hdd10",
>        "ruleset": 2,
>        "type": 1,
>        "min_size": 1,
>        "max_size": 10,
>        "steps": [
>            {
>                "op": "take",
>                "item": -27,
>                "item_name": "HDD10~hdd10"
>            },
>            {
>                "op": "chooseleaf_firstn",
>                "num": 0,
>                "type": "host"
>            },
>            {
>                "op": "emit"
>            }
>        ]
>    },
>    {
>        "rule_id": 3,
>        "rule_name": "replicated-hdd72",
>        "ruleset": 3,
>        "type": 1,
>        "min_size": 1,
>        "max_size": 10,
>        "steps": [
>            {
>                "op": "take",
>                "item": -18,
>                "item_name": "default~hdd"
>            },
>            {
>                "op": "chooseleaf_firstn",
>                "num": 0,
>                "type": "host"
>            },
>            {
>                "op": "emit"
>            }
>        ]
>    },
>	{
>        "rule_id": 4,
>        "rule_name": "ec-ssd",
>        "ruleset": 4,
>        "type": 3,
>        "min_size": 3,
>        "max_size": 5,
>        "steps": [
>            {
>                "op": "set_chooseleaf_tries",
>                "num": 5
>            },
>            {
>                "op": "set_choose_tries",
>                "num": 100
>            },
>            {
>                "op": "take",
>                "item": -32,
>                "item_name": "SSD~ssd"
>            },
>            {
>                "op": "chooseleaf_indep",
>                "num": 0,
>                "type": "host"
>            },
>            {
>                "op": "emit"
>            }
>        ]
>    },
>	{
>        "rule_id": 5,
>        "rule_name": "ec-hdd10",
>        "ruleset": 5,
>        "type": 3,
>        "min_size": 3,
>        "max_size": 5,
>        "steps": [
>            {
>                "op": "set_chooseleaf_tries",
>                "num": 5
>            },
>            {
>                "op": "set_choose_tries",
>                "num": 100
>            },
>            {
>                "op": "take",
>                "item": -27,
>                "item_name": "HDD10~hdd10"
>            },
>            {
>                "op": "chooseleaf_indep",
>                "num": 0,
>                "type": "host"
>            },
>            {
>                "op": "emit"
>            }
>        ]
>    },
>{
>        "rule_id": 6,
>        "rule_name": "ec-hdd72",
>        "ruleset": 6,
>        "type": 3,
>        "min_size": 3,
>        "max_size": 5,
>        "steps": [
>            {
>                "op": "set_chooseleaf_tries",
>                "num": 5
>            },
>            {
>                "op": "set_choose_tries",
>                "num": 100
>            },
>            {
>                "op": "take",
>                "item": -18,
>                "item_name": "default~hdd"
>            },
>            {
>                "op": "chooseleaf_indep",
>                "num": 0,
>                "type": "host"
>            },
>            {
>                "op": "emit"
>            }
>        ]
>    }
>]
>
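>For completeness, the pools using these rules (and their replica sizes) can
>be listed with:
>
>    ceph osd pool ls detail
>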
>------------- Cluster status
>
>  cluster:
>    id:     8700e000-d2ac-4393-8380-1cf4779166b5
>    health: HEALTH_WARN
>            nodeep-scrub flag(s) set
>            71 pgs not deep-scrubbed in time
>
>  services:
>    mon: 5 daemons, quorum ceph01,ceph02,ceph03,ceph05,ceph04 (age 26h)
>    mgr: ceph04(active, since 2d), standbys: ceph03, ceph05, ceph02, ceph01
>    mds: cephfs:1 {0=ceph02=up:active} 4 up:standby
>    osd: 41 osds: 41 up (since 2h), 41 in (since 2h)
>         flags nodeep-scrub
>    rgw: 5 daemons active (ceph01.rgw0, ceph02.rgw0, ceph03.rgw0, ceph04.rgw0, ceph05.rgw0)
>
>  data:
>    pools:   23 pools, 1576 pgs
>    objects: 11.55M objects, 44 TiB
>    usage:   84 TiB used, 126 TiB / 210 TiB avail
>    pgs:     1576 active+clean
>
>  io:
>    client:   154 MiB/s rd, 51 MiB/s wr, 2.02k op/s rd, 1.88k op/s wr
>    cache:    23 MiB/s flush, 144 MiB/s evict, 44 op/s promote, 1 PGs flushing
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


