Hi,
I posted links to the docs [1], [2] just yesterday ;-)
You should see the respective OSD in the output of
'cephadm ceph-volume lvm list' on that node. You should then be able
to get it back under cephadm's management with

cephadm deploy --name osd.x
But I haven't tried this yet myself, so please report back if that
works for you.
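Roughly the sequence I have in mind (<cluster-fsid> and <osd-fsid> are
placeholders you would take from 'ceph fsid' and from the lvm list
output, and I'm quoting the --osd-fsid flag from memory, so please
double-check against 'cephadm deploy -h'):

# cephadm ceph-volume lvm list
  --> on the re-installed node, note the OSD id and its osd fsid
# cephadm deploy --fsid <cluster-fsid> --name osd.x --osd-fsid <osd-fsid>

I would expect deploy to also ask for the config and keyring of that
daemon, but again, I haven't tried this myself.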
Regards,
Eugen
[1] https://tracker.ceph.com/issues/49159
[2] https://tracker.ceph.com/issues/46691
Quoting mabi <mabi@xxxxxxxxxxxxx>:
Hello,
I accidentally re-installed the OS of an OSD node in my Octopus
cluster (managed by cephadm). Luckily, the OSD data is on a separate
disk and was not affected by the re-install.
Now I have the following state:
    health: HEALTH_WARN
            1 stray daemon(s) not managed by cephadm
            1 osds down
            1 host (1 osds) down
To fix that, I tried to run:
# ceph orch daemon add osd ceph1f:/dev/sda
Created no osd(s) on host ceph1f; already created?
That did not work, so I tried:
# ceph cephadm osd activate ceph1f
no valid command found; 10 closest matches:
...
Error EINVAL: invalid command
That did not work either. So I wanted to ask: how can I "adopt" an
OSD disk back into my cluster?
Thanks for your help.
Regards,
Mabi
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx