Re: Ceph storage project for virtualization

Hi Eneko! 

I don't really have that data yet, but I was planning to have as primary
OSDs only the ones in the same datacenter as the hypervisors using the
storage; the other datacenters would only hold replicas. I assume you ask
because replication is fully synchronous.
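
Just to make the idea concrete, a CRUSH rule like the following is roughly
what I have in mind (only a sketch: the dc1/dc2/dc3 bucket names are
examples and I still have to verify the exact syntax against our tree):

    # One rule per "local" datacenter. The first take/chooseleaf/emit
    # decides which OSD ends up first in the acting set (the primary);
    # the later steps add the replicas in the remote datacenters.
    rule dc1_primary {
        id 10
        type replicated
        step take dc1
        step chooseleaf firstn 1 type host
        step emit
        step take dc2
        step chooseleaf firstn 1 type host
        step emit
        step take dc3
        step chooseleaf firstn 1 type host
        step emit
    }

    # Pools used from dc1's hypervisors/NFS gateways would then be
    # switched to it with: ceph osd pool set <pool> crush_rule dc1_primary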

To go step by step: imagine for the moment that the failure domain is a
rack and all the replicas stay in the same datacenter, spread across
different racks and rows. In that case the latency should be low and
acceptable.
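
Something along these lines is what I had in mind for that single-datacenter
step (again just a sketch; the dc1/rack bucket names, pool name and PG count
are examples):

    # Racks under the dc1 datacenter bucket (dc1 assumed to exist already)
    ceph osd crush add-bucket rack1 rack
    ceph osd crush add-bucket rack2 rack
    ceph osd crush add-bucket rack3 rack
    ceph osd crush move rack1 datacenter=dc1
    ceph osd crush move rack2 datacenter=dc1
    ceph osd crush move rack3 datacenter=dc1

    # Replicate across racks while staying inside the dc1 subtree
    ceph osd crush rule create-replicated dc1_by_rack dc1 rack
    ceph osd pool create vm-dc1 128 128 replicated dc1_by_rack
    ceph osd pool set vm-dc1 size 3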

My question was more about the redundant NFS part and whether you have
any experience with similar setups. First of all, I was trying to find
out whether what I'm planning is feasible at all.

Thank you so much :) 

Cheers! 

On 2024-03-05 11:43, Eneko Lacunza wrote:

> Hi Egoitz,
> 
> What is the network latency between the datacenters?
> 
> Cheers
> 
> On 5/3/24 at 11:31, egoitz@xxxxxxxxxxxxx wrote:
> 
>> Hi!
>> 
>> I have been reading some Ceph ebooks and documentation and learning
>> about it. The goal of all this is to build rock-solid storage for
>> virtual machines. After all that reading I have not been able to
>> answer this question by myself, so I was wondering if you could
>> perhaps clarify my doubt.
>> 
>> Let's imagine three datacenters, each one with, for instance, 4
>> virtualization hosts. As I was planning to build a solution for different
>> hypervisors, I have been thinking of the following environment.
>> 
>> - I planned to have my Ceph storage (with different pools inside) with
>> OSDs in three different datacenters (datacenter as the failure domain).
>> 
>> - Each datacenter's hosts will access a redundant NFS service in their
>> own datacenter.
>> 
>> - The redundant NFS service of each datacenter will consist of two NFS
>> gateways accessing the OSDs of the placement groups located in that same
>> datacenter. I planned to achieve this with OSD weights, so that the
>> CRUSH algorithm builds the map in such a way that each datacenter ends
>> up having the primary OSDs of its placement groups in its own
>> datacenter. Obviously, replica OSDs will exist in the other two
>> datacenters, and I don't rule out using erasure coding in some manner
>> either.
>> 
>> - The NFS gateways could be the redundant NFS gateway service from Ceph
>> (I have seen they have now developed something for this purpose:
>> https://docs.ceph.com/en/quincy/mgr/nfs/) or perhaps two separate Debian
>> machines accessing Ceph with RADOS and exporting that storage to the
>> hypervisors over NFS. In the case of the Debian machines, I have heard
>> of good results using pacemaker/corosync to provide HA for that NFS
>> (between 0.5 and 3 seconds for failover until the service is up again).
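>> 
>> Something like this is what I had in mind for the Ceph-managed option
>> (cephadm-style commands, untested on my side; cluster name, host names
>> and IPs are just examples):
>> 
>>     ceph fs volume create vmfs
>>     ceph nfs cluster create nfs-dc1 "2 nfsgw1,nfsgw2" --ingress --virtual_ip 10.0.1.50/24
>>     ceph nfs export create cephfs --cluster-id nfs-dc1 --pseudo-path /vmstore --fsname vmfs
>>     # hypervisors in that datacenter would then mount <virtual_ip>:/vmstore
>> 
>> And for the Debian + pacemaker/corosync alternative, roughly (resource
>> names, IPs and the CephFS mount point are again examples):
>> 
>>     pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=10.0.1.50 cidr_netmask=24 --group nfs_ha
>>     pcs resource create nfs_daemon ocf:heartbeat:nfsserver nfs_shared_infodir=/var/lib/nfs --group nfs_ha
>>     pcs resource create nfs_export ocf:heartbeat:exportfs clientspec=10.0.1.0/24 options=rw,no_root_squash directory=/mnt/cephfs/vmstore fsid=1 --group nfs_ha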
>> 
>> What do you think about this plan? Do you see it as feasible? We will
>> also work with KVM, where we could access Ceph directly, but I would
>> also need to provide storage for Xen and VMware.
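>> 
>> For KVM the direct path would simply be RBD, something like this (pool
>> and image names are examples):
>> 
>>     ceph osd pool create vmpool 128
>>     rbd pool init vmpool
>>     rbd create vmpool/vm1-disk0 --size 20480
>>     # then point qemu/libvirt at rbd:vmpool/vm1-disk0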
>> 
>> Thank you so much in advance,
>> 
>> Cheers!
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> 
> Eneko Lacunza
> Technical Director
> Binovo IT Human Project
> 
> Tel. +34 943 569 206 | https://www.binovo.es
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
> 
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



