Re: ceph octopus mysterious OSD crash

mkay.
Sooo... what's the new and nifty proper way to clean this up?
The outsider's view is,
"I should just be able to run   'ceph orch osd rm 33'"

but that returns
Unable to find OSDs: ['33']
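(My guess at a manual fallback would be the old mon-level cleanup, roughly along these lines -- untested on my end, so treat it as a sketch rather than the blessed procedure:

    # purge removes the OSD from the CRUSH map, deletes its auth key,
    # and drops it from the osd map in one step
    ceph osd purge 33 --yes-i-really-mean-it

    # or the older step-by-step equivalent:
    ceph osd crush remove osd.33
    ceph auth del osd.33
    ceph osd rm 33

but I was hoping the orchestrator had a proper way to handle it.)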


----- Original Message -----
From: "Stefan Kooman" <stefan@xxxxxx>
To: "Philip Brown" <pbrown@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxx>
Sent: Thursday, March 18, 2021 10:09:28 PM
Subject: Re: ceph octopus mysterious OSD crash

On 3/19/21 2:20 AM, Philip Brown wrote:
> yup cephadm and orch was used to set all this up.
> 
> Current state of things:
> 
> ceph osd tree shows
> 
>   33    hdd    1.84698              osd.33       destroyed         0  1.00000


^^ Destroyed, ehh, this doesn't look good to me. Ceph thinks this OSD is 
destroyed. Do you know what might have happened to osd.33? Did you 
perform a "kill an OSD" while testing?

AFAIK you can't fix that anymore. You will have to remove it and redeploy
it. It might even get a new OSD id.
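Roughly something like the following, once the dead id has been purged from
the cluster maps (untested here; the hostname and device path are just
placeholders for your setup):

    # wipe the old LVM/partition data so the device shows up as available again
    ceph orch device zap <hostname> /dev/sdX --force

    # if an OSD spec such as "ceph orch apply osd --all-available-devices"
    # is in place, the orchestrator should create a new OSD on its own;
    # otherwise add it explicitly:
    ceph orch daemon add osd <hostname>:/dev/sdX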
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


