Re: OSD_ORPHAN issues after jewel->luminous upgrade

Can you dump your osd map and post it in a tracker ticket?
Or, if you're not comfortable with that, you can upload it with ceph-post-file and only developers will be able to see it.
-Greg
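
For reference, a rough sketch of the commands involved (the file path and
description string below are placeholders, not taken from this thread):

    # dump the current binary osdmap from the monitors
    ceph osd getmap -o /tmp/osdmap.bin
    # optionally inspect it locally as plain text
    osdmaptool /tmp/osdmap.bin --print
    # upload it so that only Ceph developers can retrieve it
    ceph-post-file -d "OSD_ORPHAN after jewel->luminous upgrade" /tmp/osdmap.bin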

On Thu, Dec 7, 2017 at 10:36 AM Graham Allan <gta@xxxxxxx> wrote:
Just updated a fairly long-lived (originally firefly) cluster from jewel
to luminous 12.2.2.

One of the issues I see is a new health warning:

OSD_ORPHAN 3 osds exist in the crush map but not in the osdmap
     osd.2 exists in crush map but not in osdmap
     osd.14 exists in crush map but not in osdmap
     osd.19 exists in crush map but not in osdmap

Seemed reasonable enough; these low-numbered OSDs were on
long-decommissioned hardware. I thought I had removed them completely,
though, and it seems I had:

> # ceph osd crush ls osd.2
> Error ENOENT: node 'osd.2' does not exist
> # ceph osd crush remove osd.2
> device 'osd.2' does not appear in the crush map

So I wonder where this warning is coming from and, if it's erroneous,
how I can clear it.
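
One way to see exactly what the health check is complaining about is to
look at the raw CRUSH map rather than the name-based commands above; a
minimal sketch (the temporary file paths are placeholders):

    # dump the binary crush map and decompile it to text
    ceph osd getcrushmap -o /tmp/crush.bin
    crushtool -d /tmp/crush.bin -o /tmp/crush.txt
    # look for leftover device entries matching the orphaned ids
    grep -E '^device (2|14|19) ' /tmp/crush.txt
    # the same device list is also available as JSON
    ceph osd crush dump | less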

Graham
--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
