Hello Josh,
the OSD is "down"; it has been down since it experienced read/write errors on
the old disk, which, as I said, I removed after the cluster had rebalanced.
It is still in the "down" state now:
root@ceph4:~# ceph osd tree down
ID  CLASS  WEIGHT     TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         523.97095  root default
-9          58.21899      host ceph4
49    hdd    3.63899          osd.49     down         0  1.00000
I am unsure whether a destroy would work because of the old PV that is still
around and visible in the output of pvs (look for the UUID
"5d1acce2-ba98-4b4c-81bd-f52a3309161f"). Of course this PV no longer actually
exists, because I pulled the old disk and inserted a new one.
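If it is really only this stale LVM metadata that is in the way, I guess a
rough cleanup could look like the lines below (untested so far; "ceph-old" is
just a placeholder for whatever VG name pvs/vgs shows next to the missing PV):

pvs -o pv_name,pv_uuid,vg_name             # find the VG sitting on the missing PV
vgreduce --removemissing --force ceph-old  # drop the missing PV from that VG
vgremove --force ceph-old                  # remove the leftover VG metadata
dmsetup ls                                 # check for stale ceph dm mappings to remove

(ceph-volume lvm zap probably cannot help here, since the underlying device
itself is gone.)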
Rainer
On 25.04.22 at 18:26, Josh Baergen wrote:
> On Mon, Apr 25, 2022 at 10:22 AM Rainer Krienke <krienke@xxxxxxxxxxxxxx> wrote:
>> Hello,
>
> Hi!
>
>> --> RuntimeError: The osd ID 49 is already in use or does not exist.
>
> This error indicates that the issue is with the osd ID itself, not
> with the disk or lvm state. Do you need to run a "ceph osd destroy 49"
> first? (You could check "ceph osd tree down" to see if the osd is in a
> "down" or "destroyed" state.)
>
> Josh
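If I understand your suggestion correctly, the rest of the procedure would
then roughly be the following (only a sketch; /dev/sdX stands for the new
disk, and I have not tried this yet):

ceph osd tree down                                  # confirm osd.49 is "down", not "destroyed"
ceph osd destroy 49 --yes-i-really-mean-it          # mark the ID as destroyed so it can be reused
ceph-volume lvm create --osd-id 49 --data /dev/sdX  # redeploy on the new disk, reusing ID 49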
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312