Good day,

I'm having an issue re-deploying a host back into my production Ceph cluster.

Due to some bad memory (picked up by a scrub), which has since been replaced, I felt the need to reinstall the host to be sure no files were damaged.
Prior to decommissioning the host I set the CRUSH weight of each OSD to 0.
Once the OSDs had flushed all their data I stopped the daemons.
I then purged the OSDs from the CRUSH map with "ceph osd purge", followed by "ceph osd crush rm {host}" to remove the host bucket from the CRUSH map.
I also ran "ceph-deploy purge {host}" and "ceph-deploy purgedata {host}" from the management node.
I then reinstalled the host, made the necessary config changes, and ran the appropriate ceph-deploy commands (ceph-deploy install..., ceph-deploy admin..., ceph-deploy osd create...) to bring the host and its OSDs back into the cluster, the same as I would when adding a new host node.
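For reference, the steps above correspond roughly to the following command sequence (the OSD ID, host name, and device path are placeholders, and the exact ceph-deploy syntax depends on the ceph-deploy version in use):

```shell
# Drain each OSD on the host by setting its CRUSH weight to 0
ceph osd crush reweight osd.12 0

# Once data has migrated off, stop the daemon on the host itself
systemctl stop ceph-osd@12

# Remove the OSD from the cluster (Luminous and later)
ceph osd purge 12 --yes-i-really-mean-it

# Remove the now-empty host bucket from the CRUSH map
ceph osd crush rm myhost

# After reinstalling the OS, redeploy from the management node
ceph-deploy install myhost
ceph-deploy admin myhost
ceph-deploy osd create myhost:/dev/sdb   # older ceph-deploy syntax; 2.x uses --data
```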
Running "ceph osd df tree" shows the OSDs, but the host node is not displayed.
Inspecting the CRUSH map I see that no host bucket has been created, nor are any of the host's OSDs listed under one.
The OSDs also did not start, which explains the weight being 0, but I presume the OSDs not starting isn't the only issue, since the CRUSH map lacks any detail for the newly installed host.
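One way to inspect what actually landed in the CRUSH map is to decompile it with crushtool (file names here are arbitrary):

```shell
ceph osd tree                        # quick view of buckets and OSDs
ceph osd getcrushmap -o crush.bin    # dump the binary CRUSH map
crushtool -d crush.bin -o crush.txt  # decompile it to readable text
grep -A5 "^host " crush.txt          # look for the expected host bucket
```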
Could anybody tell me where I've gone wrong?
I'm also assuming there shouldn't be an issue with re-using the same host name. Do I need to manually add the host bucket and OSD detail back into the CRUSH map, or shouldn't ceph-deploy take care of that?
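If the host bucket does have to be recreated by hand, commands along these lines should work (the host name, root name, OSD ID, and weight below are examples only):

```shell
# Create the host bucket and place it under the default root
ceph osd crush add-bucket myhost host
ceph osd crush move myhost root=default

# Attach each OSD to the host bucket with its intended weight
ceph osd crush add osd.12 1.81940 host=myhost
```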
Thanks
OS: Ubuntu 16.04.3 LTS
Ceph version: 12.2.1 / 12.2.2 - Luminous
Kind regards
Geoffrey Rhodes
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com