Re: Failed to start Ceph disk activation: /dev/dm-18

Hi,

It seems I found the cause: the disk array had been used for ZFS before and was never wiped.
I zapped the disks with sgdisk and via ceph, but a "zfs_member" signature was still left somewhere on the disks.
Wiping the disk (wipefs -a -f /dev/mapper/mpatha), running "ceph osd create --zap-disk" twice until an entry showed up in "df", and a reboot fixed it.
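
For reference, the wipe itself boiled down to roughly this per multipath device (the other devices were handled the same way):

# remove leftover ZFS/filesystem signatures from the multipath device
wipefs -a -f /dev/mapper/mpatha
# destroy the GPT and MBR structures as well
sgdisk --zap-all /dev/mapper/mpatha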

Then the OSDs were failing again. The cause: IPv6 duplicate address detection (DAD) on the bond interface, which I disabled via sysctl.
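The knob I used is the per-interface accept_dad sysctl; roughly like this (bond0 and the file name are just examples, adjust to your interface):

# disable IPv6 duplicate address detection on the bond interface (bond0 assumed)
sysctl -w net.ipv6.conf.bond0.accept_dad=0
# make it persistent across reboots
echo "net.ipv6.conf.bond0.accept_dad = 0" > /etc/sysctl.d/90-no-dad.conf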
Reboot and voilà, the cluster was immediately online.

Kind regards,
Kevin.

2017-05-16 16:59 GMT+02:00 Kevin Olbrich <ko@xxxxxxx>:
Hi!

I am currently deploying a small cluster with two nodes. I installed Ceph Jewel on all nodes and did a basic deployment.
After "ceph osd create..." I am now getting "Failed to start Ceph disk activation: /dev/dm-18" on boot. None of the 28 OSDs have ever become active.
This server has a 14-disk JBOD attached via 4x fibre using multipath (4x active, multibus). We have two of these servers.
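The multipath policy comes from /etc/multipath.conf; a minimal sketch of the relevant part (not our exact config) would be:

defaults {
    # merge all 4 paths into one path group, all active
    path_grouping_policy multibus
    # use mpatha-style names instead of WWIDs
    user_friendly_names yes
}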

OS: Latest CentOS 7

[root@osd01 ~]# ceph -v
ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)

Command run:
ceph-deploy osd create osd01.example.local:/dev/mapper/mpatha:/dev/disk/by-partlabel/journal01
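
The journal partitions are referenced by GPT partlabel; such a label would be set roughly like this (device and partition number below are placeholders, not our actual layout):

# give partition 1 a GPT name so it appears under /dev/disk/by-partlabel/
sgdisk -c 1:journal01 /dev/sdx
# re-read the partition table so the symlink shows up
partprobe /dev/sdx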

There is no error in journalctl, only that the unit failed:
May 16 16:47:33 osd01.example.local systemd[1]: Failed to start Ceph disk activation: /dev/dm-27.
May 16 16:47:33 osd01.example.local systemd[1]: ceph-disk@dev-dm\x2d27.service: main process exited, code=exited, status=124/n/a
May 16 16:47:33 osd01.example.local systemd[1]: ceph-disk@dev-dm\x2d24.service failed.
May 16 16:47:33 osd01.example.local systemd[1]: Unit ceph-disk@dev-dm\x2d24.service entered failed state.
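
Triggering the activation by hand for one device looks roughly like this (the partition name under /dev/mapper is an assumption, it may differ depending on the multipath setup):

# show how ceph-disk sees the devices
ceph-disk list
# try activating the data partition manually with verbose output
ceph-disk -v activate /dev/mapper/mpatha1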

[root@osd01 ~]# gdisk -l /dev/mapper/mpatha
GPT fdisk (gdisk) version 0.8.6
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/mapper/mpatha: 976642095 sectors, 465.7 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): DEF0B782-3B7F-4AF5-A0CB-9E2B96C40B13
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 976642061
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048       976642061   465.7 GiB   FFFF  ceph data

I have had problems with multipath when running Ceph in the past, but this time I was unable to solve the problem.
Any ideas?

Kind regards,
Kevin.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
