On 15-09-2023 10:25, Stefan Kooman wrote:
> I could just nuke the whole dev cluster, wipe all disks and start
> fresh after reinstalling the hosts, but as I have to adopt 17 clusters
> to the orchestrator, I'd rather get some learnings from the not working
There is actually a cephadm "kill it with fire" option that does exactly
that for you, but yeah, make sure you know how to fix it when things do
not go according to plan. It all magically works, until it doesn't 😉.
cephadm rm-cluster --fsid your-fsid-here --force
... as a last resort (short of wipefs / shred on all disks).
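Since rm-cluster --force is irreversible, it can be worth guarding it with
a quick sanity check on the fsid before pulling the trigger. A minimal
sketch (the fsid below is a placeholder, and the actual rm-cluster call is
left commented out since it is destructive):

```shell
#!/bin/sh
# Placeholder fsid -- substitute the real one, e.g. from `ceph fsid`
# or from `cephadm ls` on the host.
fsid="4f8e2c1a-0000-0000-0000-000000000000"

# Refuse to proceed unless the string at least looks like a valid fsid
# (a lowercase hex UUID), to catch copy/paste mistakes.
if echo "$fsid" | grep -Eq '^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$'; then
    echo "fsid looks valid: $fsid"
    # cephadm rm-cluster --fsid "$fsid" --force   # destructive: uncomment deliberately
else
    echo "refusing: '$fsid' is not a valid fsid" >&2
    exit 1
fi
```

Note that rm-cluster removes the daemons, systemd units and config that
cephadm deployed, but it does not wipe the OSD data devices themselves;
for a truly blank slate you would still need wipefs or similar on each
disk.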
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx