Sage says a whole bunch of fixes for this have gone in since both then and 12.1.2. We should be pushing out a final 12.1.3 today for people to test on; can you try that and report back once it's out?
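Once it's out, something like the following should confirm that every daemon has actually picked up 12.1.3 after you upgrade (assuming "ceph versions" is available on your build; the exact package upgrade steps depend on your distro):
$ sudo ceph versions
$ sudo ceph tell osd.* version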
-Greg
On Thu, Aug 10, 2017 at 8:32 AM Sean Purdy <s.purdy@xxxxxxxxxxxxxxxx> wrote:
Luminous 12.1.1 RC
Our OSD osd.8 failed, so we removed it.
We added a new disk and did:
$ ceph-deploy osd create --dmcrypt --bluestore store02:/dev/sdd
That worked: it created osd.18, and the OSD has data.
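(I assume, though I haven't dug into it, that the objectstore type of the new OSD shows up in its metadata, i.e. something like
$ sudo ceph osd metadata 18 | grep osd_objectstore
should report bluestore.)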
However, the mgr output at http://localhost:7000/servers showed osd.18 under a blank hostname, not under the node we attached it to.
But it is working, and "ceph osd tree" looks OK.
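I assume the /servers page is built from the daemon metadata, so presumably the same metadata dump would also show what hostname osd.18 is reporting, which might explain the blank entry (not something I've confirmed):
$ sudo ceph osd metadata 18 | grep -i hostname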
The problem I see is:
When I run "ceph osd crush tree", I see this host in the items list under the name:default~hdd tree:
device_class: hdd
name: store02~hdd
type: host
but my new drive is missing under this host - there are 5 OSDs listed, not 6.
*However*, if I look further down, under the name:default tree at the host:
device_class: ""
name: store02
type: host
I see all the devices I'm expecting, including osd.18.
Is this something to worry about, or is there something that needs fixing? Health is WARN, but only for scrubbing reasons.
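If it does need fixing, my assumption (untested, and the syntax may differ between the 12.1.x RCs) is that re-applying the device class would go roughly like this:
$ sudo ceph osd crush rm-device-class osd.18
$ sudo ceph osd crush set-device-class hdd osd.18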
Output of related commands below.
Thanks for any help,
Sean Purdy
$ sudo ceph osd tree
ID CLASS WEIGHT   TYPE NAME        UP/DOWN REWEIGHT PRI-AFF
-1       32.73651 root default
-3       10.91217     host store01
 0   hdd  1.81870         osd.0         up  1.00000 1.00000
 5   hdd  1.81870         osd.5         up  1.00000 1.00000
 6   hdd  1.81870         osd.6         up  1.00000 1.00000
 9   hdd  1.81870         osd.9         up  1.00000 1.00000
12   hdd  1.81870         osd.12        up  1.00000 1.00000
15   hdd  1.81870         osd.15        up  1.00000 1.00000
-5       10.91217     host store02
 1   hdd  1.81870         osd.1         up  1.00000 1.00000
 7   hdd  1.81870         osd.7         up  1.00000 1.00000
10   hdd  1.81870         osd.10        up  1.00000 1.00000
13   hdd  1.81870         osd.13        up  1.00000 1.00000
16   hdd  1.81870         osd.16        up  1.00000 1.00000
18   hdd  1.81870         osd.18        up  1.00000 1.00000
-7       10.91217     host store03
 2   hdd  1.81870         osd.2         up  1.00000 1.00000
 3   hdd  1.81870         osd.3         up  1.00000 1.00000
 4   hdd  1.81870         osd.4         up  1.00000 1.00000
11   hdd  1.81870         osd.11        up  1.00000 1.00000
14   hdd  1.81870         osd.14        up  1.00000 1.00000
17   hdd  1.81870         osd.17        up  1.00000 1.00000
$ sudo ceph osd crush tree
[
{
"id": -8,
"device_class": "hdd",
"name": "default~hdd",
"type": "root",
"type_id": 10,
"items": [
{
"id": -2,
"device_class": "hdd",
"name": "store01~hdd",
"type": "host",
"type_id": 1,
"items": [
{
"id": 0,
"device_class": "hdd",
"name": "osd.0",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 5,
"device_class": "hdd",
"name": "osd.5",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 6,
"device_class": "hdd",
"name": "osd.6",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 9,
"device_class": "hdd",
"name": "osd.9",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 12,
"device_class": "hdd",
"name": "osd.12",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 15,
"device_class": "hdd",
"name": "osd.15",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
}
]
},
{
"id": -4,
"device_class": "hdd",
"name": "store02~hdd",
"type": "host",
"type_id": 1,
"items": [
{
"id": 1,
"device_class": "hdd",
"name": "osd.1",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 7,
"device_class": "hdd",
"name": "osd.7",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 10,
"device_class": "hdd",
"name": "osd.10",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 13,
"device_class": "hdd",
"name": "osd.13",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 16,
"device_class": "hdd",
"name": "osd.16",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
}
]
},
{
"id": -6,
"device_class": "hdd",
"name": "store03~hdd",
"type": "host",
"type_id": 1,
"items": [
{
"id": 2,
"device_class": "hdd",
"name": "osd.2",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 3,
"device_class": "hdd",
"name": "osd.3",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 4,
"device_class": "hdd",
"name": "osd.4",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 11,
"device_class": "hdd",
"name": "osd.11",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 14,
"device_class": "hdd",
"name": "osd.14",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 17,
"device_class": "hdd",
"name": "osd.17",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
}
]
}
]
},
{
"id": -1,
"device_class": "",
"name": "default",
"type": "root",
"type_id": 10,
"items": [
{
"id": -3,
"device_class": "",
"name": "store01",
"type": "host",
"type_id": 1,
"items": [
{
"id": 0,
"device_class": "hdd",
"name": "osd.0",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 5,
"device_class": "hdd",
"name": "osd.5",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 6,
"device_class": "hdd",
"name": "osd.6",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 9,
"device_class": "hdd",
"name": "osd.9",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 12,
"device_class": "hdd",
"name": "osd.12",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 15,
"device_class": "hdd",
"name": "osd.15",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
}
]
},
{
"id": -5,
"device_class": "",
"name": "store02",
"type": "host",
"type_id": 1,
"items": [
{
"id": 1,
"device_class": "hdd",
"name": "osd.1",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 7,
"device_class": "hdd",
"name": "osd.7",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 10,
"device_class": "hdd",
"name": "osd.10",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 13,
"device_class": "hdd",
"name": "osd.13",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 16,
"device_class": "hdd",
"name": "osd.16",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 18,
"device_class": "hdd",
"name": "osd.18",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
}
]
},
{
"id": -7,
"device_class": "",
"name": "store03",
"type": "host",
"type_id": 1,
"items": [
{
"id": 2,
"device_class": "hdd",
"name": "osd.2",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 3,
"device_class": "hdd",
"name": "osd.3",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 4,
"device_class": "hdd",
"name": "osd.4",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 11,
"device_class": "hdd",
"name": "osd.11",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 14,
"device_class": "hdd",
"name": "osd.14",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
},
{
"id": 17,
"device_class": "hdd",
"name": "osd.17",
"type": "osd",
"type_id": 0,
"crush_weight": 1.818695,
"depth": 2
}
]
}
]
}
]
$ sudo ceph status
  cluster:
    id:     a8c34694-a172-4418-a7dd-dd8a642eb545
    health: HEALTH_WARN
            32 pgs not deep-scrubbed for 86400
            174 pgs not scrubbed for 86400
  services:
    mon: 3 daemons, quorum store01,store02,store03
    mgr: store01(active)
    osd: 18 osds: 18 up, 18 in
    rgw: 3 daemons active
  data:
    pools:   6 pools, 672 pgs
    objects: 4018k objects, 303 GB
    usage:   1482 GB used, 32040 GB / 33523 GB avail
    pgs:     672 active+clean
  io:
    client: 72099 B/s rd, 542 kB/s wr, 70 op/s rd, 102 op/s wr
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com