12.2.2, recently upgraded from 10.2.9. I didn't experience any issues after the upgrade; in fact, it went as smoothly as possible.
Orphaned OSDs appeared after some combination of prepare/purge manipulations.
Anyway, it's not that big of a deal as this was just a cosmetic issue and had no real impact. The fix was also quite easy to figure out.
2017-12-20 4:03 GMT+03:00 Brad Hubbard <bhubbard@xxxxxxxxxx>:
Version?
See http://tracker.ceph.com/issues/22346 for a (limited) explanation.
On Tue, Dec 19, 2017 at 6:35 PM, Vladimir Prokofev <v@xxxxxxxxxxx> wrote:
> Took a little walk and figured it out.
> I just added a dummy osd.20 with weight 0.000 in my CRUSH map and set it.
> This alone was enough for my cluster to assume that only this osd.20 was
> orphaned; the others disappeared.
> Then I just did
> $ ceph osd crush remove osd.20
> and now my cluster has no orphaned OSDs.
> Case closed.
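> Roughly, the full sequence looked like this (the exact lines added to the
> decompiled map are a sketch rather than a verbatim copy of my edit):
> $ ceph osd getcrushmap -o crm
> $ crushtool -d crm -o crm.d
> # in crm.d, add a "device 20 osd.20" entry and, if needed, an
> # "item osd.20 weight 0.000" line inside a bucket, then recompile:
> $ crushtool -c crm.d -o crm.new
> $ ceph osd setcrushmap -i crm.new
> $ ceph osd crush remove osd.20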
>
> 2017-12-19 10:39 GMT+03:00 Vladimir Prokofev <v@xxxxxxxxxxx>:
>>
>> Hello.
>>
>> After some furious "ceph-deploy osd prepare/osd zap" cycles to figure out
>> the correct command for ceph-deploy to create a bluestore OSD on an HDD with
>> wal/db on an SSD, I now have orphaned OSDs, which are nowhere to be found in
>> the CRUSH map!
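>> (For reference: with ceph-deploy 2.x, where bluestore is the default, a
>> command of roughly this shape does the job; the device names and hostname
>> below are placeholders, and older 1.5.x releases take "osd prepare" with
>> host:disk style arguments instead.)
>> $ ceph-deploy osd create --data /dev/sdX --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p2 <hostname>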
>>
>> $ ceph health detail
>> HEALTH_WARN 4 osds exist in the crush map but not in the osdmap
>> ....
>> OSD_ORPHAN 4 osds exist in the crush map but not in the osdmap
>> osd.20 exists in crush map but not in osdmap
>> osd.30 exists in crush map but not in osdmap
>> osd.31 exists in crush map but not in osdmap
>> osd.32 exists in crush map but not in osdmap
>>
>> $ ceph osd crush remove osd.30
>> device 'osd.30' does not appear in the crush map
>> $ ceph osd crush remove 30
>> device '30' does not appear in the crush map
>>
>> If I get the CRUSH map with
>> $ ceph osd getcrushmap -o crm
>> $ crushtool -d crm -o crm.d
>> I don't see any mention of those OSDs there either.
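>> (A quick check over the decompiled map, for example, prints nothing here:)
>> $ grep -E 'osd\.(20|30|31|32)' crm.d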
>>
>> I don't see this affecting my cluster in any way (yet), so for now this is
>> a cosmetic issue.
>> But I'm worried it may somehow affect it in the future (not too much, as I
>> don't really see this happening), and, what's worse, that the cluster will
>> not return to a "healthy" state after it completes remapping/fixing degraded PGs.
>>
>> Any ideas how to fix this?
>
>
>
>
--
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com