Took a little walk and figured it out.
I just added a dummy osd.20 with weight 0.000 to my CRUSH map and set it. That alone was enough: the cluster then reported only osd.20 as orphaned, and the others disappeared from the warning.
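For the record, the edit cycle was roughly this (file names and the bucket placement are just placeholders, adapt them to your own map):

$ ceph osd getcrushmap -o crm
$ crushtool -d crm -o crm.d
# edit crm.d: add "device 20 osd.20" to the devices section
# (and, if needed, an "item osd.20 weight 0.000" entry in a host bucket)
$ crushtool -c crm.d -o crm.new
$ ceph osd setcrushmap -i crm.new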
Then I just did
$ ceph osd crush remove osd.20
and now my cluster has no orphaned OSDs.
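To double-check, something like this should now come back clean (assuming nothing else in the cluster is degraded):

$ ceph osd crush dump | grep osd.20    # should return nothing
$ ceph health detail                   # OSD_ORPHAN should be gone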
Case closed.
2017-12-19 10:39 GMT+03:00 Vladimir Prokofev <v@xxxxxxxxxxx>:
Hello.

After some furious "ceph-deploy osd prepare/osd zap" cycles to figure out the correct ceph-deploy command to create a bluestore HDD with wal/db on SSD, I now have orphaned OSDs which are nowhere to be found in the CRUSH map!

$ ceph health detail
HEALTH_WARN 4 osds exist in the crush map but not in the osdmap
...
OSD_ORPHAN 4 osds exist in the crush map but not in the osdmap
    osd.20 exists in crush map but not in osdmap
    osd.30 exists in crush map but not in osdmap
    osd.31 exists in crush map but not in osdmap
    osd.32 exists in crush map but not in osdmap

$ ceph osd crush remove osd.30
device 'osd.30' does not appear in the crush map
$ ceph osd crush remove 30
device '30' does not appear in the crush map

If I dump the CRUSH map with

$ ceph osd getcrushmap -o crm
$ crushtool -d crm -o crm.d

I don't see any mention of those OSDs there either.

I don't see this affecting my cluster in any way (yet), so for now this is a cosmetic issue. But I'm worried it may somehow affect it in the future (not too much, as I don't really see that happening), and, worse, that the cluster will not return to a "healthy" state after it finishes remapping/fixing degraded PGs.

Any ideas how to fix this?