We are running a small Ceph cluster with two nodes. Our failureDomain is set to host so that the data is replicated between the two hosts. The other night one host crashed hard, and three OSDs won't recover. They fail with either

debug 2021-01-13T08:13:17.855+0000 7f9bfbd6ef40 -1 osd.23 0 OSD::init() : unable to read osd superblock
debug 2021-01-13T08:13:17.855+0000 7f9bfbd6ef40 1 bluestore(/var/lib/ceph/osd/ceph-23) umount
debug 2021-01-13T08:13:17.855+0000 7f9bea85a700 0 bluestore(/var/lib/ceph/osd/ceph-23) allocation stats probe 0: cnt: 0 frags: 0 size: 0
debug 2021-01-13T08:13:17.855+0000 7f9bea85a700 0 bluestore(/var/lib/ceph/osd/ceph-23) probe -1: 0, 0, 0
debug 2021-01-13T08:13:17.855+0000 7f9bea85a700 0 bluestore(/var/lib/ceph/osd/ceph-23) probe -2: 0, 0, 0
debug 2021-01-13T08:13:17.855+0000 7f9bea85a700 0 bluestore(/var/lib/ceph/osd/ceph-23) probe -4: 0, 0, 0
debug 2021-01-13T08:13:17.855+0000 7f9bea85a700 0 bluestore(/var/lib/ceph/osd/ceph-23) probe -8: 0, 0, 0
debug 2021-01-13T08:13:17.855+0000 7f9bea85a700 0 bluestore(/var/lib/ceph/osd/ceph-23) probe -16: 0, 0, 0
debug 2021-01-13T08:13:17.855+0000 7f9bea85a700 0 bluestore(/var/lib/ceph/osd/ceph-23) ------------
debug 2021-01-13T08:13:17.855+0000 7f9bfbd6ef40 4 rocksdb: [db/db_impl.cc:390] Shutdown: canceling all background work
debug 2021-01-13T08:13:17.855+0000 7f9bfbd6ef40 4 rocksdb: [db/db_impl.cc:563] Shutdown complete
debug 2021-01-13T08:13:17.855+0000 7f9bfbd6ef40 1 bluefs umount
debug 2021-01-13T08:13:17.855+0000 7f9bfbd6ef40 1 bdev(0x557150e20700 /var/lib/ceph/osd/ceph-23/block) close
debug 2021-01-13T08:13:18.167+0000 7f9bfbd6ef40 1 freelist shutdown
debug 2021-01-13T08:13:18.167+0000 7f9bfbd6ef40 1 bdev(0x557150e20000 /var/lib/ceph/osd/ceph-23/block) close
debug 2021-01-13T08:13:18.411+0000 7f9bfbd6ef40 -1 ** ERROR: osd init failed: (22) Invalid argument

or

debug -2> 2021-01-13T08:13:29.991+0000 7f402c5f9700 -1 rocksdb: submit_common error: Corruption: block checksum mismatch: expected 2795871023, got 2381104739 in db/000060.sst offset 748408 size 3819 code = 2 Rocksdb transaction:

How can I delete and re-create these OSDs to get them back fully operational?

Any help appreciated!

/Fabian
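
A minimal sketch of the usual wipe-and-redeploy path (not from the original thread), assuming a plain ceph-volume based deployment; osd.23 is taken from the log above, and /dev/sdX is only a placeholder for the failed OSD's underlying device:

  # mark the dead OSD out and remove it from the CRUSH map, auth keys and OSD map
  ceph osd out osd.23
  ceph osd purge 23 --yes-i-really-mean-it

  # wipe the old BlueStore data on the underlying device
  ceph-volume lvm zap --destroy /dev/sdX

  # re-create the OSD on the cleaned device
  ceph-volume lvm create --data /dev/sdX

With the failure domain set to host, the surviving host should still hold a replica of every placement group, so the re-created OSDs would backfill from there once they come up.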