Hello! I'm considering switching my current storage solution to Ceph. Today we use iSCSI as the access protocol, across several different hypervisors: VMware, Hyper-V, XCP-ng, etc. I was reading that the current version of Ceph has discontinued iSCSI support in favor of RBD and NVMe-oF.

I imagine there are thousands of production deployments with different hypervisors connecting to Ceph via iSCSI, so I was surprised not to find much discussion of this in forums or mailing lists, given that so many projects depend on the Ceph + iSCSI combination, that native RBD really only works well with Proxmox or OpenStack, and that NVMe-oF is not yet fully supported by Ceph or by many other popular hypervisors.

So is the trend that other hypervisors will start to support native RBD over time, or that they will adopt NVMe-oF as Ceph's implementation stabilizes? Am I missing or mixing something up?

Filipe
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx