Hi!
I am currently deploying a small cluster with two nodes. I installed Ceph Jewel on both nodes and did a basic deployment.
After "ceph osd create..." I am now getting "Failed to start Ceph disk activation: /dev/dm-18" on boot. All 28 OSDs were never active.
Each server has a 14-disk JBOD attached via 4x fibre using multipath (all four paths active, multibus policy). We have two such servers.
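In case the path state matters, this is how I check the multipath topology (a sketch, output trimmed; mpatha is just one of the 14 maps):

[root@osd01 ~]# multipath -ll mpatha
# expecting a single "multibus" path group with all four paths
# listed as "active ready running"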
OS: Latest CentOS 7
[root@osd01 ~]# ceph -v
ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)
Command run:
ceph-deploy osd create osd01.example.local:/dev/mapper/mpatha:/dev/disk/by-partlabel/journal01
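For completeness, the journal partitions were labelled beforehand on a separate device, roughly like this (a sketch from memory; /dev/sdx and the 10G size are placeholders, not my exact command):

[root@osd01 ~]# sgdisk --new=1:0:+10G --change-name=1:journal01 \
                --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdx
# 45b0969e-... is the standard Ceph journal partition type GUID;
# the disk name and size are placeholders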
There is no error in journalctl, just that the unit failed:
May 16 16:47:33 osd01.example.local systemd[1]: Failed to start Ceph disk activation: /dev/dm-27.
May 16 16:47:33 osd01.example.local systemd[1]: ceph-disk@dev-dm\x2d27.service: main process exited, code=exited, status=124/n/a
May 16 16:47:33 osd01.example.local systemd[1]: ceph-disk@dev-dm\x2d24.service failed.
May 16 16:47:33 osd01.example.local systemd[1]: Unit ceph-disk@dev-dm\x2d24.service entered failed state.
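If I read the man pages right, status=124 is the exit code of timeout(1), so the activation seems to time out rather than error out (the ceph-disk@ unit appears to run "ceph-disk trigger" under timeout and an flock, if I read it correctly). This is how the unit can be inspected and the trigger run by hand (dm-27 as an example):

[root@osd01 ~]# systemctl cat 'ceph-disk@dev-dm\x2d27.service'
# shows the actual ExecStart line and its timeout
[root@osd01 ~]# ceph-disk --verbose trigger --sync /dev/dm-27
# running the trigger in the foreground should show where it hangs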
[root@osd01 ~]# gdisk -l /dev/mapper/mpatha
GPT fdisk (gdisk) version 0.8.6
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/mapper/mpatha: 976642095 sectors, 465.7 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): DEF0B782-3B7F-4AF5-A0CB-9E2B96C40B13
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 976642061
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048       976642061    465.7 GiB  FFFF  ceph data
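Since gdisk shows the type code as FFFF (a GUID it does not recognize), this is how I read back the exact partition type GUID (a sketch; the GUID in the comment is what I would expect, not verified output):

[root@osd01 ~]# sgdisk --info=1 /dev/mapper/mpatha
# "Partition GUID code" should be the Ceph data type
# 4fbd7e29-9d25-41b8-afd0-062c0ceff05d on a plain disk; as far as I
# know, ceph-disk uses a distinct mpath-specific type GUID on
# multipath devices, and the ceph udev rules match on that GUID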
I have had problems with multipath and Ceph in the past, but this time I have not been able to solve the problem myself.
Any ideas?
Kind regards,
Kevin.