All,

I have 4 nodes, each with 5 OSDs. I recently upgraded to Infernalis via ceph-deploy. The upgrade went mostly OK, but one of my nodes cannot mount any of its OSDs. When I look at the status of the service, I see:

Apr 07 12:22:06 borg02 ceph-osd[3868]: 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x27a) [0x7f0086ef02aa]
Apr 07 12:22:06 borg02 ceph-osd[3868]: 10: (OSDService::get_map(unsigned int)+0x3d) [0x7f008699cecd]
Apr 07 12:22:06 borg02 ceph-osd[3868]: 11: (OSD::init()+0xe12) [0x7f0086951682]
Apr 07 12:22:06 borg02 ceph-osd[3868]: 12: (main()+0x2998) [0x7f00868d41c8]
Apr 07 12:22:06 borg02 ceph-osd[3868]: 13: (__libc_start_main()+0xf5) [0x7f0083756b15]
Apr 07 12:22:06 borg02 ceph-osd[3868]: 14: (()+0x2f0959) [0x7f0086904959]
Apr 07 12:22:06 borg02 ceph-osd[3868]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 07 12:22:06 borg02 systemd[1]: ceph-osd@1.service: main process exited, code=killed, status=6/ABRT
Apr 07 12:22:06 borg02 systemd[1]: Unit ceph-osd@1.service entered failed state.
Apr 07 12:22:06 borg02 systemd[1]: ceph-osd@1.service failed.

I found a mention of what appears to be the same bug, but it was closed with nothing added (http://tracker.ceph.com/issues/14021).

Does anyone have any ideas on this? I cannot seem to get these OSDs up at all. That node is also a monitor, which seems to be fine.

Brian Andrus