Hi, if anyone knows how to help: I have an HDD pool in my cluster, and after rebooting one server the OSDs in it started to crash, and the situation is only getting worse. The pool is a backup pool with OSD as the failure domain and a replica size of 2. I then tried running ceph-bluestore-tool repair, and it fails with what I believe is the same error that shows up in the OSD logs:

[root@cwvh13 ~]# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-81 --log-level 10
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.16/rpm/el7/BUILD/ceph-14.2.16/src/os/bluestore/Allocator.cc: In function 'virtual Allocator::SocketHook::~SocketHook()' thread 7f6467ffcec0 time 2021-03-11 12:13:12.121766
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/gigantic/release/14.2.16/rpm/el7/BUILD/ceph-14.2.16/src/os/bluestore/Allocator.cc: 53: FAILED ceph_assert(r == 0)
 ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x7f645e1a7b27]
 2: (()+0x25ccef) [0x7f645e1a7cef]
 3: (()+0x3cd57f) [0x5642e85c457f]
 4: (HybridAllocator::~HybridAllocator()+0x17) [0x5642e85f3f37]
 5: (BlueStore::_close_alloc()+0x42) [0x5642e84379d2]
 6: (BlueStore::_close_db_and_around(bool)+0x2f8) [0x5642e84bbac8]
 7: (BlueStore::_fsck(BlueStore::FSCKDepth, bool)+0x293) [0x5642e84bbf13]
 8: (main()+0x13cc) [0x5642e83caaec]
 9: (__libc_start_main()+0xf5) [0x7f645ae24555]
 10: (()+0x1fae9f) [0x5642e83f1e9f]

If I read the backtrace right, the assert fires in the allocator's admin-socket hook teardown (Allocator::SocketHook::~SocketHook) while BlueStore is being closed, i.e. in _close_db_and_around() called from _fsck(), so the tool dies on shutdown rather than mid-check.
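In case it helps anyone reproduce or check my steps, this is the general sequence I have been following on the affected node. To be clear about assumptions: osd.81 is just the OSD shown above, the systemd unit name assumes a standard CentOS 7 rpm deployment of ceph-osd, and the fsck/repair invocations follow the ceph-bluestore-tool man page (I am on 14.2.16):

    # keep CRUSH from rebalancing while the OSD is down
    ceph osd set noout

    # the daemon must be stopped before touching BlueStore offline
    systemctl stop ceph-osd@81

    # read-only consistency check first; --deep also reads and verifies object data
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-81 --deep yes

    # only if fsck reports errors, attempt the repair
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-81

In my case it is the repair invocation above that produces the assert, so I never get a usable result out of it.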