Re: Ceph storage project for virtualization

Hi Egoitz,

What is the network latency between the datacenters?

Cheers

On 5/3/24 at 11:31, egoitz@xxxxxxxxxxxxx wrote:
Hi!

I have been reading some Ceph ebooks and documentation and learning
about it. The goal of all this is to build a rock-solid storage
backend for virtual machines. After all that learning I have not been
able to answer this question by myself, so I was wondering if perhaps
you could clarify my doubt.

Let's imagine three datacenters, each one with, for instance, 4
virtualization hosts. As I was planning to build a solution for
different hypervisors, I have been thinking of the following
environment.

- I planned to have my Ceph storage (with different pools inside) with
OSDs in three different datacenters (the datacenter being the failure
domain).

- Each datacenter's hosts will access a redundant NFS service in their
own datacenter.

- Each datacenter's redundant NFS service will be composed of two NFS
gateways accessing the OSDs located in that same datacenter. I planned
to achieve this with OSD weights (or primary affinity), so that the
CRUSH map ends up choosing, for each placement group, the local
datacenter's OSD as the primary (see the sketch after this list).
Obviously, replica OSDs will exist in the other datacenters, and I
don't discard using erasure coding in some manner either.

- The NFS gateways could be the redundant NFS gateway service from
Ceph itself (I have seen they have now developed something for this
purpose: https://docs.ceph.com/en/quincy/mgr/nfs/), or perhaps two
separate Debian machines accessing Ceph through RADOS and sharing that
storage with the hypervisors over NFS; a sketch of the first option
also follows after this list. In the case of the Debian machines, I
have heard of good results using pacemaker/corosync to provide HA for
that NFS (between 0.5 and 3 seconds for failover until the service is
up again).
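
To make the idea concrete, here is a rough sketch of the primary
placement part, driving the plain ceph CLI from Python. The bucket
names dc1..dc3, the pool name vm-nfs and the OSD ids are placeholders
of mine, not anything Ceph prescribes:

    import subprocess

    def ceph(*args):
        # thin wrapper around the ceph CLI; raises if a command fails
        subprocess.run(["ceph", *args], check=True)

    # one "datacenter" bucket per site, hung under the default root
    for dc in ("dc1", "dc2", "dc3"):
        ceph("osd", "crush", "add-bucket", dc, "datacenter")
        ceph("osd", "crush", "move", dc, "root=default")
    # (each site's hosts/OSDs are then moved under their bucket, e.g.
    #  ceph osd crush move host1 datacenter=dc1)

    # replicated rule that places one copy in every datacenter
    ceph("osd", "crush", "rule", "create-replicated",
         "rep3-per-dc", "default", "datacenter")
    ceph("osd", "pool", "create", "vm-nfs", "128", "128",
         "replicated", "rep3-per-dc")
    ceph("osd", "pool", "set", "vm-nfs", "size", "3")

    # primary affinity (0.0-1.0): lower it on the remote sites' OSDs
    # so CRUSH prefers a local OSD as primary of each PG; osd.4-osd.11
    # stand in for the OSDs of the two remote sites
    for osd_id in range(4, 12):
        ceph("osd", "primary-affinity", "osd.%d" % osd_id, "0.1")

I understand that primary affinity only steers which OSD acts as the
primary of a PG; writes still have to be acknowledged by the remote
replicas, so the latency between datacenters will still matter.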
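
For the Ceph-provided gateway option, something along these lines
(following the quincy mgr/nfs doc linked above; the cluster id
nfs-dc1, the CephFS filesystem vmfs, the paths and the virtual IP are
assumptions of mine, and the exact flags should be checked against the
release in use):

    import subprocess

    def ceph(*args):
        subprocess.run(["ceph", *args], check=True)

    # one NFS-Ganesha cluster per site, 2 daemons, fronted by the
    # ingress service (haproxy + keepalived) on a site-local virtual IP
    ceph("nfs", "cluster", "create", "nfs-dc1", "2",
         "--ingress", "--virtual-ip", "10.1.0.100/24")

    # export a directory of the (assumed) CephFS filesystem "vmfs" to
    # the hypervisors of that site
    ceph("nfs", "export", "create", "cephfs",
         "--cluster-id", "nfs-dc1",
         "--pseudo-path", "/vmstore",
         "--fsname", "vmfs",
         "--path", "/dc1/vmstore")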

What do you think about this plan? Do you see it as feasible? We will
also work with KVM, where we could access Ceph directly, but I would
also need to provide storage for Xen and VMware.
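
For the KVM part, the direct access I mean would be something like
this minimal sketch with the python3-rados/python3-rbd bindings (the
pool vm-rbd and the image name are placeholders of mine):

    import rados
    import rbd

    # connect with the defaults from /etc/ceph/ceph.conf
    # (client.admin keyring assumed)
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("vm-rbd")   # assumed RBD pool name

    # create a 50 GiB image; libvirt/qemu can then attach it directly
    # with a <source protocol='rbd' name='vm-rbd/vm01-disk0'/> disk
    rbd.RBD().create(ioctx, "vm01-disk0", 50 * 1024 ** 3)

    ioctx.close()
    cluster.shutdown()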

Thank you so much in advance,

Cheers!

Eneko Lacunza
Technical Director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



