OSD_ORPHAN issues after jewel->luminous upgrade

I just updated a fairly long-lived cluster (originally deployed on firefly) from jewel to luminous 12.2.2.

One of the issues I see is a new health warning:

OSD_ORPHAN 3 osds exist in the crush map but not in the osdmap
    osd.2 exists in crush map but not in osdmap
    osd.14 exists in crush map but not in osdmap
    osd.19 exists in crush map but not in osdmap

That seemed reasonable enough: these low-numbered OSDs were on long-decommissioned hardware. I thought I had removed them completely, though, and it seems I had:

# ceph osd crush ls osd.2
Error ENOENT: node 'osd.2' does not exist
# ceph osd crush remove osd.2
device 'osd.2' does not appear in the crush map
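
I haven't yet checked whether the devices array of the crush map itself still carries these IDs, which is where I'd guess the health check is looking; something like this should show it (the grep pattern is just a quick-and-dirty illustration):

# ceph osd crush dump | grep -B1 '"osd\.2"'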

So I wonder where this warning is coming from, and if it's erroneous, how can I clear it?
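
If there's no cleaner way, the fallback I have in mind is the usual extract/decompile/edit/recompile cycle, assuming the stale device lines actually show up in the text map (file names here are just examples):

# ceph osd getcrushmap -o /tmp/crush.bin
# crushtool -d /tmp/crush.bin -o /tmp/crush.txt
  ... remove the stale "device N osd.N" lines from /tmp/crush.txt ...
# crushtool -c /tmp/crush.txt -o /tmp/crush.new
# ceph osd setcrushmap -i /tmp/crush.new

but I'd rather understand why the warning fires before injecting a hand-edited map.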

Graham
--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx


