Hi,

Yes, I ran the commands before:

# ceph osd crush remove osd.71
device 'osd.71' does not appear in the crush map
# ceph auth del osd.71
entity osd.71 does not exist

Thanks.
/stwong

-----Original Message-----
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of Eugen Block
Sent: Friday, July 5, 2019 4:54 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: ceph-volume failed after replacing disk

Hi,

did you also remove that OSD from crush and also from auth before recreating it?

ceph osd crush remove osd.71
ceph auth del osd.71

Regards,
Eugen

Quoting "ST Wong (ITSC)" <ST@xxxxxxxxxxxxxxxx>:

> Hi all,
>
> We replaced a faulty disk in one of our N OSDs and tried to follow the
> steps under "Replacing an OSD" at
> http://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-osds/,
> but got an error:
>
> # ceph osd destroy 71 --yes-i-really-mean-it
> # ceph-volume lvm create --bluestore --data /dev/data/lv01 --osd-id 71 --block.db /dev/db/lv01
> Running command: /bin/ceph-authtool --gen-print-key
> Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
> --> RuntimeError: The osd ID 71 is already in use or does not exist.
>
> "ceph -s" still shows N OSDs, so I removed the id with "ceph osd rm 71".
> Now "ceph -s" shows N-1 OSDs and id 71 no longer appears in "ceph osd ls".
>
> However, repeating the ceph-volume command still gives the same error.
> We're running Ceph 14.2.1. I must have missed some steps. Would anyone
> please help? Thanks a lot.
>
> Rgds,
> /stwong

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
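
For reference, a sketch of the replace-an-OSD sequence based on the Nautilus add-or-rm-osds documentation linked above. It assumes a systemd-managed OSD and that the same LVs /dev/data/lv01 and /dev/db/lv01 from the thread are reused; the systemctl unit name and the zap step are assumptions, not something confirmed in this thread.

Stop the OSD daemon on its host before touching the disk:
# systemctl stop ceph-osd@71

Mark the OSD destroyed but keep its id in the osdmap so it can be reused:
# ceph osd destroy 71 --yes-i-really-mean-it

Wipe the old LVM volumes so ceph-volume can prepare them again:
# ceph-volume lvm zap /dev/data/lv01
# ceph-volume lvm zap /dev/db/lv01

Recreate the OSD with the same id on the replacement disk:
# ceph-volume lvm create --bluestore --data /dev/data/lv01 --block.db /dev/db/lv01 --osd-id 71

Note that once the id has been dropped completely with "ceph osd rm 71", it no longer exists in the osdmap and "--osd-id 71" can no longer reuse it (which matches the RuntimeError above); in that case, omitting --osd-id and letting the cluster assign the next free id may be the simpler path.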