Hi! I have been reading some Ceph ebooks and documentation and learning about it. The goal of all this is to build rock-solid storage for virtual machines. After all that reading there is one question I have not been able to answer by myself, so I was wondering if you could clarify my doubt.

Let's imagine three datacenters, each one with, for instance, 4 virtualization hosts. As I was planning to build a solution for different hypervisors, I have been thinking of the following environment:

- I planned to have my Ceph storage (with different pools inside) with OSDs in three different datacenters (the datacenter being the failure domain).
- Each datacenter's hosts will access a redundant NFS service in their own datacenter.
- The redundant NFS service of each datacenter will be composed of two NFS gateways accessing the OSDs located in that same datacenter. I planned to achieve this with OSD weights, so that CRUSH builds the map in such a way that each datacenter's accesses end up with the OSD in its own datacenter as the primary of the placement group (rough sketch in the P.S. below). Obviously, the replica OSDs would live in the other two datacenters, and I don't even rule out using erasure coding in some manner.
- The NFS gateways could be the redundant NFS gateway service from Ceph (I have seen they have now developed something for this purpose: https://docs.ceph.com/en/quincy/mgr/nfs/), or perhaps two separate Debian machines accessing Ceph with rados and sharing that storage to the hypervisors over NFS (rough sketches of both options in the P.S. below). In the case of the Debian machines, I have heard of good results using Pacemaker/Corosync to provide HA for that NFS (between 0.5 and 3 seconds for failover until the service is up again).

What do you think about this plan? Do you see it as feasible? We will also work with KVM, where we could access Ceph directly, but I would also need to provide storage for Xen and VMware.

Thank you so much in advance,
Cheers!
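
P.S. To make the placement idea a bit more concrete: whether it ends up being done with weights or with a dedicated rule, the placement I am after for the pool used by datacenter 1 is roughly the following (modelled on the ssd-primary example in the CRUSH documentation). The bucket names dc1 and other-dcs are just placeholders of mine; other-dcs would be a bucket containing only dc2 and dc3, and each datacenter would get its own analogous rule and pool:

    rule dc1-primary {
            id 10
            type replicated
            # primary: one OSD taken from the local datacenter bucket
            step take dc1
            step chooseleaf firstn 1 type host
            step emit
            # remaining replicas: taken from the other two datacenters
            step take other-dcs
            step chooseleaf firstn -1 type datacenter
            step emit
    }

The dc1 NFS gateways would then use the pool attached to this rule, so the primary they talk to stays local while the replicas still end up in the other datacenters.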
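
For the built-in NFS option, what I understood from the doc page linked above is roughly the workflow below (cluster name, hosts, virtual IP, filesystem name and export paths are only placeholders, and I have not tested this yet):

    # deploy an NFS (ganesha) cluster on two hosts in dc1, fronted by a virtual IP
    ceph nfs cluster create nfs-dc1 "2 host-dc1-a,host-dc1-b" --ingress --virtual-ip 192.0.2.100/24
    # export a CephFS path for the hypervisors to mount as a datastore
    ceph nfs export create cephfs --cluster-id nfs-dc1 --pseudo-path /vms-dc1 --fsname cephfs --path /vms-dc1

If I understand it correctly, this gives an active/backup ganesha setup with an ingress (haproxy/keepalived) in front, which would cover the "redundant NFS service per datacenter" part without the separate Debian machines.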
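
And for the alternative with two Debian gateways, the Pacemaker side I had in mind is roughly this (resource names, image, paths and addresses are placeholders, and it assumes the RBD image is already mapped on both nodes or handled by another resource):

    # floating IP the hypervisors will mount the NFS share from
    pcs resource create nfs-vip ocf:heartbeat:IPaddr2 ip=192.0.2.50 cidr_netmask=24 --group nfs-dc1
    # filesystem on top of the mapped RBD image
    pcs resource create nfs-fs ocf:heartbeat:Filesystem device=/dev/rbd/vms-dc1/nfs-img directory=/export fstype=xfs --group nfs-dc1
    # NFS server and the export itself
    pcs resource create nfs-daemon ocf:heartbeat:nfsserver nfs_shared_infodir=/export/nfsinfo --group nfs-dc1
    pcs resource create nfs-export ocf:heartbeat:exportfs clientspec=192.0.2.0/24 options=rw,no_root_squash directory=/export/vms fsid=1 --group nfs-dc1

I have left out ordering/stickiness details; this is just to show the shape of the HA group where I heard of the 0.5-3 second failover times.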