It works for me on 17.2.6 as well. Could you be more specific about what
doesn't work for you? Running that command only removes the cluster
configs etc. on that host; it does not orchestrate a removal across all
hosts, in case you're not aware of that.
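To tear down the whole cluster you would run rm-cluster on each host
individually. A minimal sketch, assuming passwordless SSH from an admin
node (the hostnames are placeholders):

ceph mgr module disable cephadm   # stop the orchestrator from redeploying daemons
for host in host1 host2 host3; do
    ssh "$host" cephadm rm-cluster --fsid <fsid> --zap-osds --force
done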
Quoting Vahideh Alinouri <vahideh.alinouri@xxxxxxxxx>:
The installed version is 17.2.5, but this method does not work at all.
On Fri, Feb 23, 2024, 10:23 AM Eugen Block <eblock@xxxxxx> wrote:
Which ceph version is this? In a small Reef test cluster this works as
expected:
# cephadm rm-cluster --fsid 2851404a-d09a-11ee-9aaa-fa163e2de51a
--zap-osds --force
Using recent ceph image
registry.cloud.hh.nde.ag/ebl/ceph-upstream@sha256:057e08bf8d2d20742173a571bc28b65674b055bebe5f4c6cd488c1a6fd51f685
Zapping /dev/sdb...
Zapping /dev/sdc...
Zapping /dev/sdd...
and lsblk shows empty drives.
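If OSD devices do survive the purge for some reason, they can also be
zapped manually on the affected host; a sketch, assuming ceph-volume is
available there and with the device path as a placeholder:

# ceph-volume lvm zap --destroy /dev/sdX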
Quoting Vahideh Alinouri <vahideh.alinouri@xxxxxxxxx>:
> Hi Guys,
>
> I ran into an issue: when I tried to purge the cluster, it was not
> purged by the commands below:
>
> ceph mgr module disable cephadm
> cephadm rm-cluster --force --zap-osds --fsid <fsid>
>
> The OSDs remain. There should be a cleanup method for the whole
> cluster, not just the MON nodes. Is there anything for this?
>
> Regards
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx