Hi,
I tried running just destroy and then re-using the ID as stated in the manual, but it doesn't seem to work.
It seems I'm unable to re-use the ID?
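For reference, this is the sequence I'm trying (same LVs as in my first mail quoted below):

# ceph osd destroy 71 --yes-i-really-mean-it
# ceph-volume lvm create --bluestore --data /dev/data/lv01 --block.db /dev/db/lv01 --osd-id 71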
Thanks.
/stwong
From: Paul Emmerich <paul.emmerich@xxxxxxxx>
Sent: Friday, July 5, 2019 5:54 PM
To: ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx>
Cc: Eugen Block <eblock@xxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] ceph-volume failed after replacing disk
Hi,
> Yes, I ran the commands before:
> # ceph osd crush remove osd.71
> device 'osd.71' does not appear in the crush map
> # ceph auth del osd.71
> entity osd.71 does not exist

That output is probably the reason why you couldn't recycle the OSD ID.
Either run just destroy and re-use the ID or run purge and not re-use the ID.
Manually deleting auth and crush entries is no longer needed since purge was introduced.
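Concretely, something like the following should do it (adjust the LV paths to your layout; this is just a sketch, not tested on your cluster):

To keep and re-use the ID:
# ceph osd destroy 71 --yes-i-really-mean-it
# ceph-volume lvm create --bluestore --data /dev/data/lv01 --block.db /dev/db/lv01 --osd-id 71

Or to drop the ID and let the cluster assign a new one:
# ceph osd purge 71 --yes-i-really-mean-it
# ceph-volume lvm create --bluestore --data /dev/data/lv01 --block.db /dev/db/lv01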
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at
https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
Thanks.
/stwong
-----Original Message-----
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of Eugen Block
Sent: Friday, July 5, 2019 4:54 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] ceph-volume failed after replacing disk
Hi,
did you also remove that OSD from crush and also from auth before recreating it?
ceph osd crush remove osd.71
ceph auth del osd.71
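If you're not sure whether those entries are still around, a quick check before re-running ceph-volume would be something like (adjust the ID as needed):

ceph osd tree | grep osd.71
ceph auth get osd.71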
Regards,
Eugen
Quoting "ST Wong (ITSC)" <ST@xxxxxxxxxxxxxxxx>:
> Hi all,
>
> We replaced a faulty disk out of N OSDs and tried to follow the steps
> in "Replacing an OSD" at
> http://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-osds/,
> but got an error:
>
> # ceph osd destroy 71 --yes-i-really-mean-it
> # ceph-volume lvm create --bluestore --data /dev/data/lv01 --osd-id 71 --block.db /dev/db/lv01
> Running command: /bin/ceph-authtool --gen-print-key
> Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
> --> RuntimeError: The osd ID 71 is already in use or does not exist.
>
> "ceph -s" still shows N OSDs. I then removed it with "ceph osd rm 71".
> Now "ceph -s" shows N-1 OSDs and id 71 doesn't appear in "ceph osd ls".
>
> However, repeating the ceph-volume command still gives the same error.
> We're running Ceph 14.2.1. I must have missed some steps. Would anyone
> please help? Thanks a lot.
>
> Rgds,
> /stwong
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com