I figured that was the problem. Glad you got it sorted out.
On Tue, Jun 27, 2017, 3:00 PM Daniel K <sathackr@xxxxxxxxx> wrote:
Well, that was simple. In the process of preparing the decompiled crush map, ceph status, and ceph osd tree output for posting, I noticed that those two OSDs -- 5 & 11 -- didn't exist, which explains it. I removed them from the crush map and all is well now.

Nothing changed in the config from kraken to luminous, so I guess kraken just didn't have a health check for that problem.

Thanks for the help!

Dan

On Tue, Jun 27, 2017 at 2:18 PM, David Turner <drakonstein@xxxxxxxxx> wrote:

Can you post your decompiled crush map, ceph status, ceph osd tree, etc.? Something in there will show what the extra entries are and the easiest way to remove them.
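In the meantime, a quick sanity check (just a sketch using the standard ceph CLI; nothing here is specific to your cluster) is to compare the device lines at the top of the decompiled map against the OSD IDs the cluster actually knows about:

    # OSD IDs present in the osdmap
    ceph osd ls

    # or the full hierarchy with hosts and weights
    ceph osd tree

Any device entry in the crush map whose ID is missing from that output is a leftover and can simply be deleted from the decompiled map before recompiling it.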
On Tue, Jun 27, 2017, 12:12 PM Daniel K <sathackr@xxxxxxxxx> wrote:

Hi,

I'm extremely new to ceph and have a small 4-node/20-osd cluster.

I just upgraded from kraken to luminous without much ado, except now when I run ceph status, I get a HEALTH_WARN because "2 osds exist in the crush map but not in the osdmap".

Googling the error message only took me to the source file on github.

I tried exporting and decompiling the crushmap -- there were two osd devices named differently. The normal name would be something like

    device 0 osd.0
    device 1 osd.1

but two were named:

    device 5 device5
    device 11 device11

I had edited the crushmap in the past, so it's possible this was introduced by me.

I tried changing those to match the rest, recompiling and setting the crushmap, but ceph status still complains.

Any assistance would be greatly appreciated.

Thanks,
Dan
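P.S. For reference, the export/edit/import cycle I'm using is roughly the standard one (the file names below are just placeholders):

    # dump the current crush map and decompile it to text
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # after editing crushmap.txt, compile it and inject it back
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new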
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com