Hi,

I built a Ceph cluster with cephadm. Each Ceph node has 4 OSDs, built from 4 HDDs (block) and 1 SSD (DB). At present, one HDD is broken, and I am trying to replace the HDD and rebuild the OSD with the new HDD and the free space on the SSD. I did the following:

#ceph osd stop osd.23
#ceph osd out osd.23
#ceph osd crush remove osd.23
#ceph osd rm osd.23
#ceph orch daemon rm osd.23 --force
#lvremove /dev/ceph-ae21e618-601e-4273-9185-99180edb8453/osd-block-96eda371-1a3f-4139-9123-24ec1ba362c4
#wipefs -af /dev/sda
#lvremove /dev/ceph-e50203a6-8b8e-480f-965c-790e21515395/osd-db-70f7a032-cf2c-4964-b979-2b90f43f2216
#ceph orch daemon add osd compute11:data_devices=/dev/sda,db_devices=/dev/sdc,osds_per_device=1

The OSD can be created, but it is always down. Is there anything that I missed during the rebuild?

Thank you very much!

Regards,
LIUTao
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
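
P.S. As an alternative to the one-shot `ceph orch daemon add osd` command above, cephadm can also deploy the OSD declaratively from an OSD service spec, which lets the orchestrator handle the shared-DB placement. This is only a sketch; the `service_id` is made up, and the host/device paths assume the layout described above:

```yaml
# osd_spec.yaml -- hypothetical spec for rebuilding the OSD on compute11.
# "replaced_osd" is an arbitrary service_id chosen for this example.
service_type: osd
service_id: replaced_osd
placement:
  hosts:
    - compute11
spec:
  data_devices:
    paths:
      - /dev/sda      # new HDD (block device)
  db_devices:
    paths:
      - /dev/sdc      # shared SSD (DB device)
```

Applied with `ceph orch apply -i osd_spec.yaml`. Note that if the SSD still carries the old LVM volume group, ceph-volume may refuse to reuse it; `ceph orch device ls` on the host should show whether both devices are reported as available before applying the spec.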