Re: Remove failed OSD

I ended up manually cleaning up the OSD host, removing stale LVs and DM
entries, and then purging the OSD with `ceph osd purge osd.19`. Looks like
it's gone for good.
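
For the record, the rough sequence was along these lines. This is only a
sketch: the VG/LV names are placeholders and the ceph-volume LV tag is
assumed from its usual conventions, so verify with `lvs` and `dmsetup ls`
before removing anything.

    # find the stale block / block.db LVs left behind for osd.19
    lvs -o lv_name,vg_name,lv_tags | grep 'ceph.osd_id=19'

    # remove the leftover LVs, then any stale device-mapper entries
    lvremove -f <vg_name>/<lv_name>      # once for block, once for block.db
    dmsetup ls | grep <vg_name>
    dmsetup remove <dm_name>             # only if an entry is still listed

    # drop the OSD from the OSD map, crush map and auth keys in one go
    ceph osd purge osd.19 --yes-i-really-mean-it

After that, `ceph osd tree` should no longer show osd.19 under ceph02.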

/Z

On Sat, 4 May 2024 at 08:29, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:

> Hi!
>
> An OSD failed in our 16.2.15 cluster. I prepared it for removal and ran
> `ceph orch daemon rm osd.19 --force`. Somehow that didn't work as expected,
> so now we still have osd.19 in the crush map:
>
> -10         122.66965              host ceph02
>  19           1.00000                  osd.19     down         0  1.00000
>
> But the OSD has only been partially cleaned up on the host: both the block
> and block.db LVs still exist.
>
> If I try to remove the OSD again, I get an error:
>
> # ceph orch daemon rm osd.19  --force
> Error EINVAL: Unable to find daemon(s) ['osd.19']
>
> How can I clean up this OSD and get rid of it completely, including its
> crush map entry? I would appreciate any suggestions or pointers.
>
> Best regards,
> Zakhar
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


