Re: replace osd with Octopus

Hi,

Assuming you deployed with cephadm, since you're mentioning Octopus, there's a brief section on this in [1]. The basis for the OSD deployment is the drive_group configuration. If nothing has changed in your setup and you replace an OSD's disk, cephadm will detect the available disk and match it against the drive_group config. If there's also enough free space on the SSD for the DB/WAL, it will redeploy the OSD.
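For case 1, a minimal sketch of what that looks like with cephadm (the OSD id 12, host ceph-node1 and device /dev/sdk are placeholders for illustration, not from your setup):

  # Remove the OSD; --replace keeps its id reserved and marks it
  # "destroyed" instead of purging it from the CRUSH map.
  ceph orch osd rm 12 --replace

  # After swapping the physical disk, wipe any leftovers so
  # cephadm considers the device available again:
  ceph orch device zap ceph-node1 /dev/sdk --force

With a drive_group spec like the hypothetical one below still applied, cephadm should then recreate the OSD with its DB/WAL on the SSD on its own:

  # HDDs as data devices, SSDs for DB/WAL (service_id is made up)
  cat <<EOF > osd_spec.yml
  service_type: osd
  service_id: hdd_with_ssd_db
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  EOF
  ceph orch apply osd -i osd_spec.yml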

The same goes for your second case: you'll need to remove all OSDs backed by that SSD from the host, zap the devices, replace the SSD, and cephadm will then redeploy the OSDs on that host. That's the simple case; see the sketch below. If redeploying all OSDs on that host is not an option, you'll probably have to pause the orchestrator and migrate the DB/WAL devices yourself to prevent too much data movement.
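As a rough sketch for the simple case (OSD ids, host and device names are again placeholders, adapt them to your setup):

  # Remove all OSDs that share the failing DB/WAL SSD:
  ceph orch osd rm 3 7 11 15 --replace

  # Once they are drained and the SSD is swapped, zap the old
  # data devices (and the new SSD if it has leftover data) so
  # the drive_group spec can pick them up again:
  ceph orch device zap ceph-node1 /dev/sdb --force

  # If you'd rather migrate the DB/WAL devices yourself, stop
  # cephadm from acting on the cluster in the meantime:
  ceph orch pause
  # ... manual migration steps ...
  ceph orch resume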

Regards,
Eugen


[1] https://docs.ceph.com/en/latest/mgr/orchestrator/#replace-an-osd


Quoting Tony Liu <tonyliu0592@xxxxxxxxxxx>:

Hi,

I did some searching on replacing an OSD and found several
different procedures, probably for different releases.
Is there a recommended process to replace an OSD with Octopus?
Two cases here:
1) Replace an HDD whose WAL and DB are on an SSD.
1-1) The failed disk is replaced by the same model.
1-2) A working disk is replaced by a bigger one.
2) Replace the SSD holding WAL and DB for multiple HDDs.


Thanks!
Tony

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


