Hello.
After some furious "ceph-deploy osd prepare"/"ceph-deploy disk zap" cycles to figure out the correct command for creating a bluestore HDD OSD with wal/db on an SSD, I now have orphan OSDs, which are nowhere to be found in the CRUSH map!
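For context, the kind of invocation I was iterating on looked roughly like this (a sketch with placeholder host and device names; this is ceph-deploy 2.x syntax, older 1.5.x releases take HOST:DISK arguments instead):

# create a bluestore OSD: data on the HDD, db/wal on SSD partitions
# (node1, /dev/sdb, /dev/sdc1, /dev/sdc2 are placeholders for my real devices)
$ ceph-deploy osd create --bluestore --data /dev/sdb \
      --block-db /dev/sdc1 --block-wal /dev/sdc2 node1
# when it went wrong, wipe the disk and start over
$ ceph-deploy disk zap node1 /dev/sdb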
$ ceph health detail
HEALTH_WARN 4 osds exist in the crush map but not in the osdmap
....
OSD_ORPHAN 4 osds exist in the crush map but not in the osdmap
osd.20 exists in crush map but not in osdmap
osd.30 exists in crush map but not in osdmap
osd.31 exists in crush map but not in osdmap
osd.32 exists in crush map but not in osdmap
$ ceph osd crush remove osd.30
device 'osd.30' does not appear in the crush map
$ ceph osd crush remove 30
device '30' does not appear in the crush map
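In case the removal fails because of how the name is resolved, these are the other inspections I can think of (a sketch; the grep patterns just match my orphan IDs):

# look for the orphan IDs in the tree output and in the raw crush dump
$ ceph osd tree | grep -E 'osd\.(20|30|31|32)'
$ ceph osd crush dump | grep -wE '(20|30|31|32)'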
If I fetch the CRUSH map with
$ ceph osd getcrushmap -o crm
$ crushtool -d crm -o crm.d
I don't see any mention of those OSDs there either.
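If they did show up in the decompiled map, I assume the fix would be to delete those device lines, recompile, and inject the map back, something like this (a sketch, not yet tried on my cluster):

# after editing crm.d to drop the orphan device entries:
$ crushtool -c crm.d -o crm.new
$ ceph osd setcrushmap -i crm.new

But since the entries aren't there to delete, I don't see what that would accomplish.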
I don't see this affecting my cluster in any way (yet), so for now this is a cosmetic issue.
But I'm worried it may somehow affect the cluster in the future (not too worried, as I don't really see that happening), and, what's worse, that the cluster will not return to a "healthy" state after it completes remapping/fixing the degraded PGs.
Any ideas how to fix this?