Can you post your decompiled crush map, ceph status, ceph osd tree, etc.? Something in there will show what the extra entries are and the easiest way to remove them.
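For reference, that output can be gathered with something like:

    ceph status
    ceph osd tree
    ceph osd crush dump    # JSON dump of devices, buckets and rules as the cluster currently sees them

If the two extra entries turn out to be stray device entries with no matching OSD, "ceph osd crush remove <name>" (using the exact name from your own map, e.g. the device5/device11 you mention) is usually the simplest way to drop them -- but post the output first so we can see what's actually there.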
On Tue, Jun 27, 2017, 12:12 PM Daniel K <sathackr@xxxxxxxxx> wrote:
Hi,

I'm extremely new to ceph and have a small 4-node/20-osd cluster. I just upgraded from kraken to luminous without much ado, except now when I run ceph status, I get a health_warn because "2 osds exist in the crush map but not in the osdmap". Googling the error message only took me to the source file on github.

I tried exporting and decompiling the crushmap -- there were two osd devices named differently. The normal name would be something like

device 0 osd.0
device 1 osd.1

but two were named:

device 5 device5
device 11 device11

I had edited the crushmap in the past, so it's possible this was introduced by me.

I tried changing those to match the rest, recompiling and setting the crushmap, but ceph status still complains.

Any assistance would be greatly appreciated.

Thanks,
Dan
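For reference, the export/edit/import cycle I went through was something like this (file names are just whatever I picked, nothing special):

    ceph osd getcrushmap -o crush.bin      # export the compiled crush map
    crushtool -d crush.bin -o crush.txt    # decompile to editable text
    # edited crush.txt, e.g. changed "device 5 device5" to "device 5 osd.5"
    crushtool -c crush.txt -o crush.new    # recompile
    ceph osd setcrushmap -i crush.new      # inject the new map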
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com