Unable to start a new osd

Hello,

ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
3.19.0-25-generic #26~14.04.1-Ubuntu

I replaced a broken osd device, and now I'm unable to get the new osd to join the cluster. The osd starts and manages to talk to the monitors, but its status never changes from down to up. I have tried recreating it by removing its keys and creating them again, and by removing the osd completely (ceph osd crush remove and ceph osd rm) before recreating it. I also tried creating a completely new osd, with the same results. Another similar host has exactly the same issue.
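
For reference, this is roughly the removal/recreation sequence I used (reconstructed from my shell history; /dev/sdX is a placeholder for the actual replacement disk):

  # remove all traces of the old osd
  ceph osd crush remove osd.103
  ceph auth del osd.103
  ceph osd rm osd.103

  # prepare and activate the replacement with ceph-disk (as shipped with hammer)
  ceph-disk prepare /dev/sdX
  ceph-disk activate /dev/sdX1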

The osd log:

2015-08-25 12:59:48.505648 7f297b68d900  0 ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3), process ceph-osd, pid 5494
2015-08-25 12:59:48.530992 7f297b68d900  0 filestore(/var/lib/ceph/osd/ceph-103) backend xfs (magic 0x58465342)
2015-08-25 12:59:48.536257 7f297b68d900  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-103) detect_features: FIEMAP ioctl is supported and appears to work
2015-08-25 12:59:48.536274 7f297b68d900  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-103) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-08-25 12:59:48.559986 7f297b68d900  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-103) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2015-08-25 12:59:48.560103 7f297b68d900  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-103) detect_feature: extsize is supported and kernel 3.19.0-25-generic >= 3.5
2015-08-25 12:59:48.643978 7f297b68d900  0 filestore(/var/lib/ceph/osd/ceph-103) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2015-08-25 12:59:48.646358 7f297b68d900 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-08-25 12:59:48.646363 7f297b68d900  1 journal _open /var/lib/ceph/osd/ceph-103/journal fd 19: 4294967296 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-08-25 12:59:48.658488 7f297b68d900  1 journal _open /var/lib/ceph/osd/ceph-103/journal fd 19: 4294967296 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-08-25 12:59:48.805680 7f297b68d900  0 <cls> cls/hello/cls_hello.cc:271: loading cls_hello
2015-08-25 12:59:48.834871 7f297b68d900  0 osd.103 122694 crush map has features 2200130813952, adjusting msgr requires for clients
2015-08-25 12:59:48.834892 7f297b68d900  0 osd.103 122694 crush map has features 2200130813952 was 8705, adjusting msgr requires for mons
2015-08-25 12:59:48.834896 7f297b68d900  0 osd.103 122694 crush map has features 2200130813952, adjusting msgr requires for osds
2015-08-25 12:59:48.834915 7f297b68d900  0 osd.103 122694 load_pgs
2015-08-25 12:59:48.834961 7f297b68d900  0 osd.103 122694 load_pgs opened 0 pgs
2015-08-25 12:59:48.837314 7f297b68d900 -1 osd.103 122694 log_to_monitors {default=true}
2015-08-25 12:59:48.849482 7f296a2b4700  0 osd.103 122694 ignoring osdmap until we have initialized
2015-08-25 12:59:48.849547 7f296a2b4700  0 osd.103 122694 ignoring osdmap until we have initialized
2015-08-25 12:59:48.849691 7f297b68d900  0 osd.103 122694 done with init, starting boot process

What could cause this?

-- 
  Eino Tuominen
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


