Hi,

I'm getting an error while adding a new node with bluestore OSDs to the cluster. The OSD is created without any host in the CRUSH tree and stays down; trying to bring it up didn't work. The same method of adding OSDs works without issue on our other clusters. Any idea what the problem is?

Ceph Version: ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
Ceph Health: OK
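For reference, the add method is roughly the following (a sketch assuming ceph-volume on Luminous; /dev/sdX and the OSD id are placeholders for the actual device and id):

  # prepare and activate a new bluestore OSD on the data device
  ceph-volume lvm create --bluestore --data /dev/sdX

  # confirm the daemon is running and check where the OSD landed in the CRUSH tree
  systemctl status ceph-osd@721
  ceph osd tree | grep -B1 721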
Startup log from the failing OSD (osd.721):

2023-10-25 20:40:40.867878 7f1f478cde40 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1698266440867866, "job": 1, "event": "recovery_started", "log_files": [270]}
2023-10-25 20:40:40.867883 7f1f478cde40 4 rocksdb: [/build/ceph-U0cfoi/ceph-12.2.11/src/rocksdb/db/db_impl_open.cc:482] Recovering log #270 mode 0
2023-10-25 20:40:40.867904 7f1f478cde40 4 rocksdb: [/build/ceph-U0cfoi/ceph-12.2.11/src/rocksdb/db/version_set.cc:2395] Creating manifest 272
2023-10-25 20:40:40.869553 7f1f478cde40 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1698266440869548, "job": 1, "event": "recovery_finished"}
2023-10-25 20:40:40.870924 7f1f478cde40 4 rocksdb: [/build/ceph-U0cfoi/ceph-12.2.11/src/rocksdb/db/db_impl_open.cc:1063] DB pointer 0x55c9061ba000
2023-10-25 20:40:40.870964 7f1f478cde40 1 bluestore(/var/lib/ceph/osd/ceph-721) _open_db opened rocksdb path db options compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152
2023-10-25 20:40:40.871234 7f1f478cde40 1 freelist init
2023-10-25 20:40:40.871293 7f1f478cde40 1 bluestore(/var/lib/ceph/osd/ceph-721) _open_alloc opening allocation metadata
2023-10-25 20:40:40.871314 7f1f478cde40 1 bluestore(/var/lib/ceph/osd/ceph-721) _open_alloc loaded 3.49TiB in 1 extents
2023-10-25 20:40:40.874700 7f1f478cde40 0 <cls> /build/ceph-U0cfoi/ceph-12.2.11/src/cls/cephfs/cls_cephfs.cc:197: loading cephfs
2023-10-25 20:40:40.874721 7f1f478cde40 0 _get_class not permitted to load sdk
2023-10-25 20:40:40.874955 7f1f478cde40 0 _get_class not permitted to load kvs
2023-10-25 20:40:40.875638 7f1f478cde40 0 _get_class not permitted to load lua
2023-10-25 20:40:40.875724 7f1f478cde40 0 <cls> /build/ceph-U0cfoi/ceph-12.2.11/src/cls/hello/cls_hello.cc:296: loading cls_hello
2023-10-25 20:40:40.875776 7f1f478cde40 0 osd.721 0 crush map has features 288232575208783872, adjusting msgr requires for clients
2023-10-25 20:40:40.875780 7f1f478cde40 0 osd.721 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
2023-10-25 20:40:40.875784 7f1f478cde40 0 osd.721 0 crush map has features 288232575208783872, adjusting msgr requires for osds
2023-10-25 20:40:40.875837 7f1f478cde40 0 osd.721 0 load_pgs
2023-10-25 20:40:40.875840 7f1f478cde40 0 osd.721 0 load_pgs opened 0 pgs
2023-10-25 20:40:40.875844 7f1f478cde40 0 osd.721 0 using weightedpriority op queue with priority op cut off at 64.
2023-10-25 20:40:40.877401 7f1f478cde40 -1 osd.721 0 log_to_monitors {default=true}
2023-10-25 20:40:40.888408 7f1f478cde40 -1 osd.721 0 mon_cmd_maybe_osd_create fail: '(34) Numerical result out of range': (34) Numerical result out of range
2023-10-25 20:40:40.891367 7f1f478cde40 -1 osd.721 0 mon_cmd_maybe_osd_create fail: '(34) Numerical result out of range': (34) Numerical result out of range
2023-10-25 20:40:40.891409 7f1f478cde40 -1 osd.721 0 init unable to update_crush_location: (34) Numerical result out of range

Thanks,
Pardhiv