Migrating to managed OSDs with ceph orch

Hi all,

I just updated a Ceph cluster from Nautilus to Octopus and followed the documentation in order to migrate from the original ceph-ansible setup to cephadm.

Overall this worked great, but there's one part I haven't been able to figure out yet and that doesn't seem to be documented: how do I migrate the OSDs to the new managed approach using service specifications?

Currently, "ceph orch ps" shows me each OSD and "ceph orch ls" lists them as "osd.2", with "9/0" running with unmanaged placement (iirc osd.2 was the first one I adopted so that's probably where the name comes from).

I tried writing a service specification that should match the current deployment and applying that, but the new entries are just sitting there at 0/3 running.
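For reference, the spec I applied looked roughly like the sketch below; the service_id, host pattern and device selection here are placeholders rather than the exact values from my cluster:

    # applied with: ceph orch apply osd -i osd_spec.yml
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: 'ceph-osd-*'
    data_devices:
      all: true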

For node-exporter I solved this by manually removing the old containers and services and waiting for Ceph to recreate the managed ones, but for OSDs that approach doesn't seem practical (unless it really is just a matter of stopping/removing the old container, which didn't do the trick in my tests).
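For the record, the node-exporter cleanup went roughly like this; the systemd unit name is an assumption based on a typical ceph-ansible deployment and may differ on other setups:

    # stop and disable the old ceph-ansible-managed exporter
    systemctl disable --now node_exporter
    # (re)apply the managed service if it isn't scheduled yet
    ceph orch apply node-exporter '*'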

Is there a proper way to do this? Or is the cluster just stuck with unmanaged OSDs if it was created without cephadm?

Thanks,
Lukas


