I could use some help with OSD trouble. The OSD daemons on two hosts started flapping. Eventually I rebooted host osd1 (osd.3), but the OSD daemon still fails to start. On closer inspection, ceph-disk@dev-sdb2.service is failing with "Error: /dev/sdb2 is not a block device".
This is the command it fails on:
roger@osd1:~$ sudo /usr/sbin/ceph-disk --verbose activate-block /dev/sdb2
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5731, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5682, in main
    args.func(args)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5438, in <lambda>
    func=lambda args: main_activate_space(name, args),
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4160, in main_activate_space
    osd_uuid = get_space_osd_uuid(name, dev)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4115, in get_space_osd_uuid
    raise Error('%s is not a block device' % path)
ceph_disk.main.Error: Error: /dev/sdb2 is not a block device
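From the traceback, get_space_osd_uuid raises as soon as the path fails a block-device check, before ever reading the OSD uuid. My paraphrase of that check (a sketch for illustration, not the actual ceph-disk source) is just a stat on the mode bits:

```python
import os
import stat

def is_block_device(path):
    # The path must exist and be a block special file; anything else
    # (missing node, regular file, char device) fails the check.
    try:
        mode = os.stat(path).st_mode
    except OSError:
        return False
    return stat.S_ISBLK(mode)
```

So if is_block_device("/dev/sdb2") is false on this host, something after the reboot has left the /dev/sdb2 node missing or of the wrong type, which is what I'm trying to track down.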
osd1 environment:
$ ceph -v
ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous (rc)
$ uname -r
4.4.0-83-generic
$ lsb_release -sc
xenial
Please advise.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com