Hey all,

As much as I'm enjoying this discussion, it's completely off my original question:

************
How to stop the automatic OSD creation from the Ceph orchestrator?
************

The problem happens because, using cinderlib, oVirt uses krbd (not librbd), and because of this the kernel (and the Ceph orchestrator) sees the disk. If there's no partition on it, Ceph tries to add it as an OSD, fails, and leaves the cluster in WARN state.

The solution stated in the manual doesn't work:

# ceph orch apply osd --all-available-devices --unmanaged=true

Besides this issue, cinderlib is working pretty decently:
- Disk creation/expansion works
- Live machine migration works
- Snapshots work

Of course there are missing items, like:
- Live storage migration
- Disk moving from/to image storage domains (only copy works)
- Statistics from the pool (like used/available space)

But in general, it's production ready.

/Ricardo

On Mon, 31 Jan 2022, 09:37 Konstantin Shalygin, <k0ste@xxxxxxxx> wrote:

> Hi,
>
> On 31 Jan 2022, at 11:38, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
>
> This is incorrect. I am using live migration with Nautilus and a stock
> kernel on CentOS 7.
>
> Marc, I think that you are confusing live migration of virtual machines
> [1] with live migration of RBD images [2] inside the cluster (between
> pools, for example) while the client is running.
>
> [1] https://libvirt.org/migration.html
> [2] https://docs.ceph.com/en/latest/rbd/rbd-live-migration/#image-live-migration
>
> k

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
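(In case it helps anyone hitting the same problem: when the one-shot `--unmanaged=true` flag doesn't stick, another approach is to export the existing OSD service specification, set `unmanaged: true` in it, and re-apply it. A minimal sketch of what such a spec might look like, assuming the default all-available-devices drivegroup; the `service_id` and `host_pattern` values are illustrative and should be taken from your own `ceph orch ls osd --export` output:)

```yaml
# Hypothetical OSD service spec marked unmanaged, so cephadm
# stops consuming every blank device it discovers as a new OSD.
service_type: osd
service_id: all-available-devices   # use the id from your own exported spec
placement:
  host_pattern: '*'
unmanaged: true                     # orchestrator will no longer auto-create OSDs
spec:
  data_devices:
    all: true
```

(You would apply it with `ceph orch apply -i osd-spec.yaml`; existing OSDs keep running, only automatic creation of new ones stops.)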