Error in boot.log - Failed to start Ceph disk activation - Luminous

Hi, 

First of all, my version:

[root@vdicnode01 ~]# ceph -v
ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous (rc)

When I boot my Ceph node (an all-in-one setup) I get the following messages in boot.log:

[FAILED] Failed to start Ceph disk activation: /dev/sdb2.
See 'systemctl status ceph-disk@dev-sdb2.service' for details.
[FAILED] Failed to start Ceph disk activation: /dev/sdb1.
See 'systemctl status ceph-disk@dev-sdb1.service' for details.

[root@vdicnode01 ~]# systemctl status ceph-disk@dev-sdb1.service
● ceph-disk@dev-sdb1.service - Ceph disk activation: /dev/sdb1
   Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2017-07-27 23:37:23 CEST; 1h 52min ago
  Process: 740 ExecStart=/bin/sh -c timeout $CEPH_DISK_TIMEOUT flock /var/lock/ceph-disk-$(basename %f) /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync %f (code=exited, status=1/FAILURE)
 Main PID: 740 (code=exited, status=1/FAILURE)

Jul 27 23:37:23 vdicnode01 sh[740]: main(sys.argv[1:])
Jul 27 23:37:23 vdicnode01 sh[740]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5682, in main
Jul 27 23:37:23 vdicnode01 sh[740]: args.func(args)
Jul 27 23:37:23 vdicnode01 sh[740]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4891, in main_trigger
Jul 27 23:37:23 vdicnode01 sh[740]: raise Error('return code ' + str(ret))
Jul 27 23:37:23 vdicnode01 sh[740]: ceph_disk.main.Error: Error: return code 1
Jul 27 23:37:23 vdicnode01 systemd[1]: ceph-disk@dev-sdb1.service: main process exited, code=exited, status=1/FAILURE
Jul 27 23:37:23 vdicnode01 systemd[1]: Failed to start Ceph disk activation: /dev/sdb1.
Jul 27 23:37:23 vdicnode01 systemd[1]: Unit ceph-disk@dev-sdb1.service entered failed state.
Jul 27 23:37:23 vdicnode01 systemd[1]: ceph-disk@dev-sdb1.service failed.


[root@vdicnode01 ~]# systemctl status ceph-disk@dev-sdb2.service
● ceph-disk@dev-sdb2.service - Ceph disk activation: /dev/sdb2
   Loaded: loaded (/usr/lib/systemd/system/ceph-disk@.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2017-07-27 23:37:23 CEST; 1h 52min ago
  Process: 744 ExecStart=/bin/sh -c timeout $CEPH_DISK_TIMEOUT flock /var/lock/ceph-disk-$(basename %f) /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync %f (code=exited, status=1/FAILURE)
 Main PID: 744 (code=exited, status=1/FAILURE)

Jul 27 23:37:23 vdicnode01 sh[744]: main(sys.argv[1:])
Jul 27 23:37:23 vdicnode01 sh[744]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5682, in main
Jul 27 23:37:23 vdicnode01 sh[744]: args.func(args)
Jul 27 23:37:23 vdicnode01 sh[744]: File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4891, in main_trigger
Jul 27 23:37:23 vdicnode01 sh[744]: raise Error('return code ' + str(ret))
Jul 27 23:37:23 vdicnode01 sh[744]: ceph_disk.main.Error: Error: return code 1
Jul 27 23:37:23 vdicnode01 systemd[1]: ceph-disk@dev-sdb2.service: main process exited, code=exited, status=1/FAILURE
Jul 27 23:37:23 vdicnode01 systemd[1]: Failed to start Ceph disk activation: /dev/sdb2.
Jul 27 23:37:23 vdicnode01 systemd[1]: Unit ceph-disk@dev-sdb2.service entered failed state.
Jul 27 23:37:23 vdicnode01 systemd[1]: ceph-disk@dev-sdb2.service failed.
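
In case it helps, the same command that the unit runs (copied from the ExecStart line above) can also be launched by hand to get the full ceph-disk trace on the console, and the unit can then be retried; /dev/sdb1 is just the device from my setup:

/usr/sbin/ceph-disk --verbose --log-stdout trigger --sync /dev/sdb1
systemctl restart ceph-disk@dev-sdb1.service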

I have created an entry in /etc/fstab in order to mount the journal disk automatically:

/dev/sdb1               /var/lib/ceph/osd/ceph-0   xfs  defaults,noatime  1 2

But when I boot, I get the same error message.
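If I understand correctly, ceph-disk normally mounts the OSD partitions itself through udev, based on the GPT partition type GUIDs, so the fstab entry may not even be needed. How the partitions are tagged can be checked with something like the following (output not pasted here; partition number 1 is just an example):

ceph-disk list
sgdisk -i 1 /dev/sdb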

When I execute ceph -s, the OSD seems to work perfectly:

[root@vdicnode01 ~]# ceph -s
  cluster:
    id:     61881df3-1365-4139-a586-92b5eca9cf18
    health: HEALTH_WARN
            Degraded data redundancy: 5/10 objects degraded (50.000%), 128 pgs unclean, 128 pgs degraded, 128 pgs undersized
            128 pgs not scrubbed for 86400

  services:
    mon: 1 daemons, quorum vdicnode01
    mgr: vdicnode01(active)
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   1 pools, 128 pgs
    objects: 5 objects, 1349 bytes
    usage:   1073 MB used, 39785 MB / 40858 MB avail
    pgs:     5/10 objects degraded (50.000%)
             128 active+undersized+degraded

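To double-check that the OSD data directory is really mounted despite the failed units, something like this can be run (mount point taken from my fstab entry above):

ceph osd tree
df -h /var/lib/ceph/osd/ceph-0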

Has anybody experienced the same issue?

Thanks a lot.
