On Fri, Sep 30, 2022 at 7:36 PM Filipe Mendes <filipehdbr@xxxxxxxxx> wrote:
>
> Hello!
>
> I'm considering switching my current storage solution to Ceph. Today we
> use iSCSI as the communication protocol, and we use several different
> hypervisors: VMware, Hyper-V, xcp-ng, etc.

Hi Filipe,

Ceph's main hypervisor target has always been QEMU/KVM, but xcp-ng
(i.e. Xen) has native support as well.  Starting with Pacific, RBD is
natively supported on Windows.  Lucian and the team are continuously
improving it, so the Hyper-V use case should be covered as well.

> I was reading that the current version of Ceph has discontinued iSCSI
> support in favor of RBD or NVMe-oF.  I imagine there are thousands of
> projects in production using different hypervisors connecting to Ceph
> via iSCSI, so I was surprised not to find much discussion on the topic
> in forums or mailing lists, since so many projects depend on both Ceph
> and iSCSI, and RBD only communicates well with Proxmox or OpenStack.
> Also, NVMe-oF is not fully supported on Ceph or on many other popular
> hypervisors.

Upstream, the ceph-iscsi gateway isn't going anywhere -- at least not
until the ceph-nvmeof gateway matures enough (currently it's very much
a work in progress).  I don't expect any active development to happen
on ceph-iscsi, but it should still be maintained and the packages
should remain available.

> So is the trend that other hypervisors will start to support RBD over
> time, or that they will start to support NVMe-oF at the same time that
> Ceph implements it stably?
>
> Am I missing or maybe mixing something up?

I'm not sure what you mean by NVMe-oF not being fully (?) supported on
various hypervisors.  Strictly speaking, you don't need any support in
the hypervisor at all: as long as your guest OS supports NVMe-oF, you
can set it up in the VM itself, exactly the same as with iSCSI.

Thanks,

                Ilya
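
As a concrete illustration of the Windows RBD support mentioned above,
a rough sketch of mapping an image might look like the following.  It
assumes Ceph for Windows (which bundles the WNBD driver) is installed
and that ceph.conf plus a keyring are configured on the host; the pool
and image names are placeholders.

    rbd create rbdpool/winimg --size 10G     # create a test image (from any Ceph client)
    rbd device map rbdpool/winimg            # on the Windows host: attach the image as a local disk via WNBD
    rbd device list                          # show currently mapped images
    rbd device unmap rbdpool/winimg          # detach when finished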
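
To illustrate the in-guest setup Ilya describes, a minimal sketch using
nvme-cli over NVMe/TCP from inside a Linux guest might look like the
following; the gateway address, port, and subsystem NQN below are
placeholders.

    modprobe nvme-tcp                                        # load the NVMe/TCP initiator module
    nvme discover -t tcp -a 192.168.1.100 -s 4420            # list subsystems exported by the gateway
    nvme connect -t tcp -a 192.168.1.100 -s 4420 \
        -n nqn.2016-06.io.example:block1                     # connect to one subsystem
    nvme list                                                # the namespace appears as /dev/nvmeXnY

From that point on the device is managed like any other local NVMe
disk, which is what makes the setup hypervisor-agnostic, just as with
an in-guest iSCSI initiator.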