Hi,

I'm assuming you are using cephadm? If so, check out this:
https://docs.ceph.com/en/latest/cephadm/osd/#activate-existing-osds

    ceph cephadm osd activate <host>...

A rough sketch of the full sequence is below the quoted message.

> On Mar 11, 2021, at 23:01, Cloud Guy <cloudguy25@xxxxxxxxx> wrote:
>
> Hello,
>
> TL;DR: Looking for guidance on ceph-volume lvm activate --all as it
> would apply to a containerized Ceph deployment (Nautilus or Octopus).
>
> Detail:
> I'm planning to upgrade my Nautilus non-container cluster to Octopus
> (eventually containerized). There's an expanded procedure that was
> tested and working in our lab; I won't go into the whole process here.
> My question is about existing OSD hosts. I have to re-platform the
> host OS, and previously (non-containerized) one of the ways the OSDs
> were reactivated after this was done was to install the Ceph packages,
> deploy keys, config, etc., then run ceph-volume lvm activate --all to
> magically bring up all OSDs.
>
> I'm looking for a similar approach, except the OSDs are containerized:
> if I re-platform the host OS (CentOS -> Ubuntu), how could I
> reactivate all OSDs as containers and avoid rebuilding the data on
> the OSDs?
>
> Thank you.
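For what it's worth, here is a minimal sketch of how the re-platform
could look, assuming the cluster is already managed by cephadm and you
run these from a node holding an admin keyring (inside "cephadm shell"
if that node is itself containerized). "osd-host-1" is a placeholder
hostname, and the new OS would first need cephadm's usual prerequisites
(systemd, podman or docker, LVM2, python3) installed:

    # 1. Re-establish SSH access: push the cluster's public key to the
    #    rebuilt host so the orchestrator can reach it again.
    ceph cephadm get-pub-key > ~/ceph.pub
    ssh-copy-id -f -i ~/ceph.pub root@osd-host-1

    # 2. Re-add the host to the orchestrator if it was removed during
    #    the rebuild (skip if "ceph orch host ls" still shows it).
    ceph orch host add osd-host-1

    # 3. Scan the host's LVM volumes and start a container for every
    #    existing OSD found -- the containerized analogue of the old
    #    "ceph-volume lvm activate --all".
    ceph cephadm osd activate osd-host-1

The OSD data on the logical volumes is untouched throughout; only the
host OS and the daemon wrappers change.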