Multiple L2 LAN segments with Ceph

Travis,

We run a routed ECMP spine-leaf network architecture with Ceph and have 
no issues on the network side whatsoever. Each leaf switch has an L2 
CIDR block inside a common L3 supernet.

We do not currently split cluster_network and public_network. If we did, 
we'd likely build a separate spine-leaf network with its own L3 supernet.

A simple IPv4 example:

- ceph-cluster: 10.1.0.0/16
     - cluster-leaf1: 10.1.1.0/24
         - node1: 10.1.1.1/24
         - node2: 10.1.1.2/24
     - cluster-leaf2: 10.1.2.0/24

- ceph-public: 10.2.0.0/16
     - public-leaf1: 10.2.1.0/24
         - node1: 10.2.1.1/24
         - node2: 10.2.1.2/24
     - public-leaf2: 10.2.2.0/24

ceph.conf would be:

cluster_network = 10.1.0.0/16
public_network = 10.2.0.0/16
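
To answer the question below directly: the directive does not have to
match a local subnet exactly. As I understand it, each daemon looks at
its local addresses and binds to the one that falls inside the
configured network, which is why a single /16 supernet works even
though every rack sits in its own /24. A rough Python sketch of that
matching logic (illustrative only, not Ceph's actual code; the
pick_bind_addr helper is made up for the example):

import ipaddress

def pick_bind_addr(local_addrs, network_cidr):
    # Return the first local address that falls inside the configured
    # network, the way a daemon on node1 would settle on 10.1.1.1 for
    # cluster traffic and 10.2.1.1 for public traffic.
    net = ipaddress.ip_network(network_cidr)
    for addr in local_addrs:
        if ipaddress.ip_address(addr) in net:
            return addr
    raise RuntimeError("no local address inside %s" % network_cidr)

# node1 from the example above: loopback plus one NIC on each fabric
local_addrs = ["127.0.0.1", "10.1.1.1", "10.2.1.1"]
print(pick_bind_addr(local_addrs, "10.1.0.0/16"))  # -> 10.1.1.1 (cluster)
print(pick_bind_addr(local_addrs, "10.2.0.0/16"))  # -> 10.2.1.1 (public)

You can confirm which addresses the daemons actually chose with
ceph osd dump and ceph mon dump.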

- Mike Dawson

On 5/28/2014 1:01 PM, Travis Rhoden wrote:
> Hi folks,
>
> Does anybody know if there are any issues running Ceph with multiple L2
> LAN segments?  I'm picturing a large multi-rack/multi-row deployment
> where you may give each rack (or row) its own L2 segment, then connect
> them all with L3/ECMP in a leaf-spine architecture.
>
> I'm wondering how cluster_network (or public_network) in ceph.conf works
> in this case.  Does that directive just tell a daemon starting on a
> particular node which network to bind to?  Or is it a CIDR that has to be
> accurate for every OSD and MON in the entire cluster?
>
> Thanks,
>
>   - Travis

