Hi,
yes, you can activate existing OSDs [1] as if you had reinstalled the
server (for example, if the host OS was damaged). I wrote a blog post
[2] a few years ago, for an early Octopus version in a virtual lab
environment, describing a manual procedure to reintroduce existing
OSDs on a new host. I haven't proofread it in a while, so it may be a
bit outdated, but it could still help with troubleshooting.
After the renamed host is back in the cluster you'll need to run:
ceph cephadm osd activate <host>
I'm not sure whether this will just work, as it wasn't part of my tests back then.
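
As a rough, untested sketch (host name below is a placeholder, and this
assumes the renamed host still has its original OSD LVM volumes intact),
the sequence would look something like:

```shell
# Hypothetical host name -- substitute your renamed node
NEW_HOST=osd-node-renamed

# Re-add the renamed host to the orchestrator inventory
ceph orch host add "$NEW_HOST"

# Let cephadm scan the host for existing OSD volumes and
# recreate the OSD daemons from them (no data rebuild)
ceph cephadm osd activate "$NEW_HOST"

# Verify the OSDs came back up under the new host name
ceph osd tree
```

Note that the CRUSH map will show the OSDs under the new host bucket, so
expect some data movement unless you adjust the CRUSH map accordingly.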
Regards,
Eugen
[1]
https://docs.ceph.com/en/latest/cephadm/services/osd/#activate-existing-osds
[2]
https://heiterbiswolkig.blogs.nde.ag/2021/02/08/cephadm-reusing-osds-on-reinstalled-server/
Quoting Deep Dish <deeepdish@xxxxxxxxx>:
Hello. We have a requirement to change the hostname on some of our OSD
nodes. All of our nodes are Ubuntu 22.04 based and were deployed with
the 17.2.7 orchestrator.
1. Is there a procedure to rename the existing node, without rebuilding
and have it detected by Ceph Orchestrator?
If not,
2. To minimize impact on the cluster (rebuilding OSDs, rebalancing, etc.), is
it possible to REINTRODUCE existing OSDs into the cluster at the newly
rebuilt node? Is there a ceph orch process to scan local node OSDs, detect
them, and create OSD daemons?
Thank you.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx