osds exist in the crush map but not in the osdmap after kraken > luminous rc1 upgrade

Hi,

I'm very new to Ceph and have a small 4-node/20-OSD cluster.

I just upgraded from Kraken to Luminous without much ado, except that when I run ceph status I now get HEALTH_WARN because "2 osds exist in the crush map but not in the osdmap".
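
Is comparing ceph osd tree against ceph osd dump the right way to see which OSD IDs are involved? I've been poking at it with roughly:

ceph health detail
ceph osd tree
ceph osd dump | grep '^osd\.'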

Googling the error message only took me to the source file on GitHub.

I exported and decompiled the crushmap, and two OSD devices were named differently from the rest. The normal name is something like

device 0 osd.0
device 1 osd.1

but two were named:

device 5 device5
device 11 device11

I had edited the crushmap in the past, so it's possible this was introduced by me.

I tried renaming those two to match the rest, then recompiling and setting the crushmap, but ceph status still complains.
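
For reference, the round trip I used was roughly this (file names are just examples):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit the two device lines in crushmap.txt
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new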

Any assistance would be greatly appreciated.

Thanks,
Dan



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
