It's a shame not to see oVirt fully integrated with Ceph; even Proxmox can do it. I do understand the limitations of the current Ceph/oVirt integration, but I believe those small issues can be overcome, and I am still hoping to see better integration.

Does anyone know how to make Ceph stop trying to add the rbd devices (by adding a disk filter), or how to stop the OSD auto-create service entirely? (See the P.S. at the bottom of this mail for the kind of filter I have in mind.)

On Thu, Jan 27, 2022 at 6:28 PM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:

> Hi,
>
> The oVirt Storage team has just dropped the old Cinder integration and made
> the cinderlib (MBS) integration without librbd support:
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1997241
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=2027719
>
> Features like live migration, easy Ceph version upgrades and disk
> snapshots are no longer available to oVirt users:
>
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
>
> I have been trying to draw attention to these regressions for a year, but I
> think the oVirt Storage team does not really understand what engineers
> need and just does what is best for themselves.
>
> P.S. I don't even want to think about what will happen when 300-400 rbd
> disks are mapped into the kernel at the same time...
>
> k
>
> Sent from my iPhone
>
> On 27 Jan 2022, at 13:16, Ricardo Alonso <ricardoalonsos@xxxxxxxxx> wrote:
>
> oVirt uses Ceph via cinderlib and, unlike OpenStack, the rbd
> devices are mapped on the hypervisor instead of the VM using them directly
> (I hope this can also change).

--
Ricardo Alonso
ricardoalonsos@xxxxxxxxx
+44 7340-546916 - UK
+55 (31) 4042-0266 - Brazil
Skype: ricardoalonso
GPG Fingerprint: FC7E 4A5F B7A4 87F4 6876 5325 D95F BFBF B7AC EE54
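
P.S. This is the rough sketch I had in mind, pieced together from the LVM and
cephadm docs. It is untested on my side, and it assumes the OSDs are deployed
with cephadm, so please treat it as a guess rather than a known-good answer:

    # /etc/lvm/lvm.conf on the hypervisors: have LVM (and therefore ceph-volume)
    # skip rbd devices entirely so they are never scanned or picked up
    devices {
        global_filter = [ "r|^/dev/rbd.*|" ]
    }

    # If cephadm manages the OSDs, mark the "all available devices" spec as
    # unmanaged so it stops auto-creating OSDs on newly mapped devices:
    ceph orch apply osd --all-available-devices --unmanaged=true

If anyone knows whether this is the right approach, or has a better one, I'd
appreciate hearing about it.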