Thanks Igor,

So I turned the debugging up to 5 and rebooted, and suddenly the OSDs are coming back in no time again. Might this be because they were so recently rebooted? I've added the log with debug below:

2022-03-16T14:31:30.031+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _mount path /var/lib/ceph/osd/ceph-9
2022-03-16T14:31:30.031+0000 7f739fd28f00 0 bluestore(/var/lib/ceph/osd/ceph-9) _open_db_and_around read-only:0 repair:0
2022-03-16T14:31:30.031+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2022-03-16T14:31:30.083+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _prepare_db_environment set db_paths to db,3648715856281 db.slow,3648715856281
2022-03-16T14:31:45.884+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _open_db opened rocksdb path db options compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,write_buffer_size=64M,compaction_readahead_size=2M
2022-03-16T14:31:45.884+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _open_super_meta old nid_max 164243
2022-03-16T14:31:45.884+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _open_super_meta old blobid_max 97608
2022-03-16T14:31:45.896+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _open_super_meta freelist_type bitmap
2022-03-16T14:31:45.896+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _open_super_meta ondisk_format 4 compat_ondisk_format 3
2022-03-16T14:31:45.896+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _open_super_meta min_alloc_size 0x1000
2022-03-16T14:31:45.904+0000 7f739fd28f00 1 freelist init
2022-03-16T14:31:45.904+0000 7f739fd28f00 1 freelist _read_cfg
2022-03-16T14:31:45.904+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _init_alloc opening allocation metadata
2022-03-16T14:31:50.032+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _init_alloc loaded 3.2 TiB in 772662 extents, allocator type hybrid, capacity 0x37e3ec00000, block size 0x1000, free 0x328e41eb000, fragmentation 0.000910959
2022-03-16T14:31:50.172+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _prepare_db_environment set db_paths to db,3648715856281 db.slow,3648715856281
2022-03-16T14:32:04.996+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _open_db opened rocksdb path db options compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,write_buffer_size=64M,compaction_readahead_size=2M
2022-03-16T14:32:04.996+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _upgrade_super from 4, latest 4
2022-03-16T14:32:04.996+0000 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _upgrade_super done
2022-03-16T14:32:05.036+0000 7f7393a19700 5 bluestore.MempoolThread(0x55b44a7d0b90) _resize_shards cache_size: 2843595409 kv_alloc: 1174405120 kv_used: 2484424 kv_onode_alloc: 184549376 kv_onode_used: 4864776 meta_alloc: 1174405120 meta_used: 20181 data_alloc: 218103808 data_used: 0
2022-03-16T14:32:05.040+0000 7f739fd28f00 0 _get_class not permitted to load sdk
2022-03-16T14:32:05.044+0000 7f739fd28f00 0 _get_class not permitted to load lua
2022-03-16T14:32:05.044+0000 7f739fd28f00 0 _get_class not permitted to load kvs

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx