Re: Single Server Ceph OSD Recovery

Hi,


in addition you need a way to recover the mon store (I assume the mon was on the same host). If the mon data is lost, you can try to rebuild it from the maps stored on the existing OSDs; see the disaster recovery section ("recovery using OSDs") of the Ceph documentation.
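Roughly, that procedure looks like this (just a sketch based on the docs; the paths, mon name and keyring location are assumptions for a default non-containerized install, and the OSDs must be stopped while you run it):

ms=/root/mon-store
mkdir $ms

# collect the cluster map fragments from every local OSD
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path $osd \
        --op update-mon-db --mon-store-path $ms
done

# rebuild the mon store; the keyring needs mon 'allow *' caps
ceph-monstore-tool $ms rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring

# move the rebuilt store into the re-created mon's data dir
mv $ms/store.db /var/lib/ceph/mon/ceph-$(hostname)/store.db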


If you cannot restore the mons, recovering the OSDs will be more or less useless.


Regards,

Burkhard


On 04.07.20 10:05, Eugen Block wrote:
Hi,

it should work with ceph-volume once you have re-created the OS:

ceph-volume lvm activate --all

We had that case just recently in a Nautilus cluster and it worked perfectly.
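For completeness, the rough sequence on the freshly installed host would be something like this (a sketch only; the package manager, backup location and verification step are assumptions):

# 1. install the same Ceph release as before (Nautilus here)
apt install ceph    # or the equivalent for your distro

# 2. restore the backed-up cluster config and keyrings
cp /backup/etc/ceph/* /etc/ceph/

# 3. detect all LVM-based OSDs on the local disks and start them
ceph-volume lvm activate --all

# 4. confirm the OSDs have come back up
ceph osd tree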

Regards,
Eugen


Quoting Daniel Da Cunha <daniel@xxxxxx>:

Hello,
As a hobby, I have been running Ceph Nautilus on a single server with 8 OSDs. As part of the setup, I set the CRUSH map to fail at the OSD level: step chooseleaf firstn 0 type osd.
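(For context, in the full CRUSH map that step sits in a rule block like the following; the rule name and id here are illustrative:)

rule replicated_osd {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type osd
    step emit
}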

Sadly, I didn’t take the necessary precautions for my boot disk and the OS failed. I have backups of /etc/ceph/ but I am not able to recover the OS.

Can you think of a way for me to recreate the OS and adopt the 8 OSDs without losing the data?

Thanks & regards,
Daniel


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



