All,

I am trying to add another OSD to our cluster using ceph-deploy. This cluster is running Jewel. I previously set up the other 12 OSDs on a fresh install using the command:

ceph-deploy osd create <host>:/dev/mapper/mpath<device>:/dev/sda<partition>

Those are all up and happy. On these systems, /dev/sda is an SSD on which I have created partitions for the journals.

The prepare step seems to complete fine (ceph-deploy osd prepare ceph-1-35a:/dev/mapper/mpathn:/dev/sda8), but when it comes time to activate, I get a Traceback:

[2016-05-10 11:27:58,195][ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[2016-05-10 11:27:58,195][ceph_deploy.osd][DEBUG ] activating host ceph-1-35a disk /dev/mapper/mpathn
[2016-05-10 11:27:58,195][ceph_deploy.osd][DEBUG ] will use init type: systemd
[2016-05-10 11:27:58,196][ceph-1-35a][INFO  ] Running command: ceph-disk -v activate --mark-init systemd --mount /dev/mapper/mpathn
[2016-05-10 11:27:58,315][ceph-1-35a][WARNING] main_activate: path = /dev/mapper/mpathn
[2016-05-10 11:27:58,315][ceph-1-35a][WARNING] get_dm_uuid: get_dm_uuid /dev/mapper/mpathn uuid path is /sys/dev/block/253:8/dm/uuid
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING] get_dm_uuid: get_dm_uuid /dev/mapper/mpathn uuid is mpath-360001ff09070e0000000000c8921000c
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING]
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING] get_dm_uuid: get_dm_uuid /dev/mapper/mpathn uuid path is /sys/dev/block/253:8/dm/uuid
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING] get_dm_uuid: get_dm_uuid /dev/mapper/mpathn uuid is mpath-360001ff09070e0000000000c8921000c
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING]
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/mapper/mpathn
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING] Traceback (most recent call last):
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING]   File "/usr/sbin/ceph-disk", line 9, in <module>
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4964, in run
[2016-05-10 11:27:58,316][ceph-1-35a][WARNING]     main(sys.argv[1:])
[2016-05-10 11:27:58,317][ceph-1-35a][WARNING]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4915, in main
[2016-05-10 11:27:58,317][ceph-1-35a][WARNING]     args.func(args)
[2016-05-10 11:27:58,317][ceph-1-35a][WARNING]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3269, in main_activate
[2016-05-10 11:27:58,317][ceph-1-35a][WARNING]     reactivate=args.reactivate,
[2016-05-10 11:27:58,317][ceph-1-35a][WARNING]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2979, in mount_activate
[2016-05-10 11:27:58,317][ceph-1-35a][WARNING]     e,
[2016-05-10 11:27:58,317][ceph-1-35a][WARNING] ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/mapper/mpathn: Line is truncated:
[2016-05-10 11:27:58,318][ceph-1-35a][ERROR ] RuntimeError: command returned non-zero exit status: 1
[2016-05-10 11:27:58,318][ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init systemd --mount /dev/mapper/mpathn

This seems to be because the command:

/sbin/blkid -p -s TYPE -o value -- /dev/mapper/mpathn

is being run instead of:

/sbin/blkid -p -s TYPE -o value -- /dev/mapper/mpathn1

Anyone have ideas on how to get these happy?

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
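(For anyone wanting to reproduce the mismatch by hand: the sketch below is only an illustration of the two blkid probes, assuming kpartx-style naming where the data partition is the base mpath name with a "1" appended. The actual blkid invocations are commented out since they need the real devices on the node.)

```shell
# Whole multipath device vs. the partition that actually holds the filesystem.
# These names mirror the log above; adjust for your own mpath device.
dev=/dev/mapper/mpathn      # whole device: ceph-disk probes this and finds no FS
part="${dev}1"              # data partition, e.g. /dev/mapper/mpathn1

echo "ceph-disk probes:   $dev"
echo "should be probing:  $part"

# On the OSD host, the first command prints nothing (no filesystem on the
# whole device, hence FilesystemTypeError), while the second prints the
# filesystem type of the prepared data partition:
# /sbin/blkid -p -s TYPE -o value -- "$dev"
# /sbin/blkid -p -s TYPE -o value -- "$part"
```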
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com