Hi All,

I am currently testing a new Ceph cluster with an SSD as journal.

# ceph -v
ceph version 10.2.7
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 Beta (Maipo)

I followed http://ceph.com/geen-categorie/ceph-recover-osds-after-ssd-journal-failure/ to replace the journal drive (for testing). All the other Ceph services are running, but ceph-osd@0 crashed:

# systemctl -l status ceph-osd@0
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
   Active: activating (auto-restart) (Result: signal) since Thu 2017-06-22 15:44:04 EDT; 1s ago
  Process: 9580 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=killed, signal=ABRT)
  Process: 9535 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 9580 (code=killed, signal=ABRT)

Jun 22 15:44:04 tinsfsceph01.abc.ca systemd[1]: Unit ceph-osd@0.service entered failed state.
Jun 22 15:44:04 tinsfsceph01.abc.ca systemd[1]: ceph-osd@0.service failed.
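For context, the procedure from that article boils down to roughly the following. This is only a sketch of what I ran; osd.0 and /dev/sdf1 are placeholders for my OSD id and the new journal partition, and paths assume the default /var/lib/ceph layout:

# Stop the OSD before touching its journal.
systemctl stop ceph-osd@0

# Flush any pending journal entries to the data disk BEFORE swapping
# the SSD; skipping this can leave unreplayed entries behind.
ceph-osd -i 0 --flush-journal

# Point the OSD at the new journal partition (placeholder device name)
# and initialize a fresh journal on it.
ln -sf /dev/sdf1 /var/lib/ceph/osd/ceph-0/journal
ceph-osd -i 0 --mkjournal

systemctl start ceph-osd@0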
Log file shows:

--- begin dump of recent events ---
     0> 2017-06-22 15:45:45.396425 7f4df5030800 -1 *** Caught signal (Aborted) **
 in thread 7f4df5030800 thread_name:ceph-osd

 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)
 1: (()+0x91d8ea) [0x561eda3988ea]
 2: (()+0xf5e0) [0x7f4df377d5e0]
 3: (gsignal()+0x37) [0x7f4df1d3c1f7]
 4: (abort()+0x148) [0x7f4df1d3d8e8]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x267) [0x561eda4962e7]
 6: (()+0x30640e) [0x561ed9d8140e]
 7: (FileJournal::~FileJournal()+0x24a) [0x561eda17d7ca]
 8: (JournalingObjectStore::journal_replay(unsigned long)+0xff2) [0x561eda18cc52]
 9: (FileStore::mount()+0x3cd6) [0x561eda163576]
 10: (OSD::init()+0x27d) [0x561ed9e21a1d]
 11: (main()+0x2c55) [0x561ed9d86dc5]
 12: (__libc_start_main()+0xf5) [0x7f4df1d28c05]
 13: (()+0x3561e7) [0x561ed9dd11e7]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

Any help?

Thanks
Alex

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com