Re: Undo ceph osd destroy?

Hi,

I don't know if the Ceph version is relevant here, but I could undo that quite quickly in my small test cluster (Octopus, native, no Docker). After the OSD was marked as "destroyed" I recreated the auth caps for that OSD_ID (marking an OSD as destroyed removes its cephx keys etc.), changed the keyring in /var/lib/ceph/osd/ceph-1/keyring to match, and restarted the OSD; now it's up and in again. Is the OSD in your case actually up and running?
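For reference, this is roughly the sequence I used; a minimal sketch assuming OSD id 1, the default cluster name, and a systemd-managed (non-containerized) deployment, so adjust the id, caps, and paths to your setup:

    # Recreate the cephx entity with the standard OSD caps; this generates
    # a fresh key, since destroying the OSD removed the old one
    ceph auth get-or-create osd.1 mon 'allow profile osd' \
        mgr 'allow profile osd' osd 'allow *'

    # Write the new key into the OSD's local keyring so the daemon
    # can authenticate against the monitors again
    ceph auth get osd.1 -o /var/lib/ceph/osd/ceph-1/keyring

    # Restart the OSD daemon; it should come back up and in
    systemctl restart ceph-osd@1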

Regards,
Eugen


Quoting Michael Fladischer <michael@xxxxxxxx>:

Hi,

I accidentally destroyed the wrong OSD in my cluster. It is now marked as "destroyed", but the HDD is still there and the data was not touched, AFAICT. I was able to activate it again using ceph-volume lvm activate and I can mark the OSD as "in", but its status is not changing from "destroyed".

Is there a way to unmark it so I can reintegrate it in the cluster?

Regards,
Michael


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


