Re: [Ceph-community] Regarding Technical Possibility of Configuring Single Ceph Cluster on Different Networks

On Fri, Jun 10, 2016 at 3:01 AM, Venkata Manojawa Paritala
<manojawapv@xxxxxxxxxx> wrote:
> Hello Friends,
>
> I am Manoj Paritala, working in Vedams Software Solutions India Pvt Ltd,
> Hyderabad, India. We are developing a POC with the below specification. I
> would like to know if it is technically possible to configure a Single Ceph
> cluster with this requirement. Please find attached the network diagram for
> more clarity on what we are trying to setup.
>
> 1. There should be 3 OSD nodes (machines), 3 Monitor nodes (machines) and 3
> Client nodes in the Ceph cluster.
>
> 2. There are 3 data centers with 3 different networks. Let's call each data
> center a Site. So we have Site1, Site2 and Site3, each with a different network.
>
> 3. Each Site should have One OSD node + Monitor node + Client node.
>
> 4. In each Site there should be again 2 sub-networks.
>
> 4a. Site Public Network :- where the Ceph clients, OSDs and Monitor
> connect.
> 4b. Site Cluster Network :- where only the OSDs communicate for replication
> and rebalancing.
>
> 5. Configure routing between the Cluster networks across sites, in such a way
> that the OSDs in one site can communicate with the OSDs in other sites.
>
> 6. Configure routing between the Site Public Networks across sites, in such a
> way that ONLY the Monitor & OSD nodes in each site can communicate with the
> nodes in other sites. PLEASE NOTE, CLIENTS IN ONE SITE WILL NOT BE ABLE TO
> COMMUNICATE WITH OSDs/CLIENTS ON OTHER SITES.

This won't work. Clients need to communicate with the primary OSD for each
placement group (PG), not just any OSD, so they will need access to every OSD
in the cluster.
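To illustrate why, here is a toy sketch (not Ceph's actual CRUSH code; the hash, PG count, and OSD IDs are made up): a client maps an object name to a PG, looks up that PG's acting set, and talks to the first OSD in the set, the primary. Since any PG's primary can live in any site, a client that cannot reach remote OSDs cannot do I/O on that PG at all.

```python
# Toy model of client-side placement -- NOT Ceph's real CRUSH algorithm.
# A client hashes the object name to a placement group (PG), then looks
# up the PG's acting set; the first OSD listed is the primary, which is
# the only OSD a client talks to for that object.

def pg_for_object(name: str, pg_num: int) -> int:
    # Ceph uses rjenkins hashing; a byte sum stands in here for illustration.
    return sum(name.encode()) % pg_num

# Hypothetical PG -> acting-set map, as CRUSH might produce across 3 sites.
# Note the primaries (first entries) are scattered over different OSDs/sites.
acting_sets = {
    0: [3, 7, 1],   # primary is OSD 3
    1: [5, 2, 8],   # primary is OSD 5
    2: [0, 4, 6],   # primary is OSD 0
}

def primary_osd(name: str) -> int:
    pg = pg_for_object(name, len(acting_sets))
    return acting_sets[pg][0]
```

Whichever OSD comes out as primary, the client must be able to open a connection to it, so restricting clients to their local site's OSDs breaks most reads and writes.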

A configuration like this is a stretched cluster, and the links between the
DCs will kill performance once you load them up or once recovery is
occurring. Do the links between your DCs meet the requirements stated here?

http://docs.ceph.com/docs/master/start/hardware-recommendations/#networks
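For reference, the per-site public/cluster split from point 4 maps onto the standard ceph.conf options; both accept a comma-separated list of subnets. This is only a sketch, and the subnets below are placeholders I made up, not taken from your diagram:

```ini
# Sketch only -- subnets are illustrative placeholders.
[global]
    # Networks the clients and monitors use to reach the OSDs (point 4a).
    public network  = 10.1.0.0/24, 10.2.0.0/24, 10.3.0.0/24
    # Networks the OSDs use among themselves for replication/recovery (point 4b).
    cluster network = 10.1.1.0/24, 10.2.1.0/24, 10.3.1.0/24
```

Note that every subnet listed under "public network" has to be routable from every client, which is exactly what the restriction in your point 6 forbids.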

>
> Hoping that my requirement is clear. Please let me know if I am not clear on
> any step.
>
> Actually, based on our reading, our understanding is that two-way replication
> between two different Ceph clusters is not possible. To work around this, we
> came up with the above configuration, which would allow us to create pools
> with OSDs on different sites / data centers and would be useful for disaster
> recovery.

I don't think this configuration will work as you expect.

>
> In case our proposed configuration is not possible, could you please suggest
> an alternative approach to achieve our requirement?

What exactly is your requirement? It isn't clearly stated.

Cheers,
Brad

>
> Thanks & Regards,
> Manoj
>
> _______________________________________________
> Ceph-community mailing list
> Ceph-community@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-community-ceph.com
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
