Re: Ceph with BGP?

Hello Stefan, thanks a lot for the reply.

   We have everything in the same datacenter, and the failure domains
are per rack. Regarding BGP, the networking team's idea is to stop using
layer 2+3 and move everything to full layer 3. For that they want to
implement BGP, with each node connected to a different ToR edge switch,
so that clients can reach the entire cluster through a single routed IP.
   Currently the public network is on a dedicated VLAN with no gateway.
They want to get rid of that VLAN so that each node has its own IP with
its own gateway (which is the ToR switch). We already have a completely
separate cluster network running on Infiniband, so the idea is to use
BGP on the public network only.
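
   In ceph.conf terms, I assume the end state looks roughly like this (a
minimal sketch with made-up addresses; the public range just needs to
cover the per-node routed IPs):

    # /etc/ceph/ceph.conf (hypothetical addressing)
    [global]
    # routed public IPs, one per node, announced via BGP to the ToR
    public_network = 10.0.0.0/24
    # unchanged; stays on the separate Infiniband fabric
    cluster_network = 192.168.100.0/24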

Thanks in advance,

Cheers,

On Tue, Jul 6, 2021 at 2:10 AM Stefan Kooman <stefan@xxxxxx> wrote:

> On 7/5/21 6:26 PM, German Anders wrote:
> > Hi All,
> >
> >     I have an existing, functional Ceph cluster (latest Luminous
> > release) with two networks: one public (layer 2+3) and one for the
> > cluster. The public network uses a VLAN over 10GbE, the cluster
> > network uses Infiniband at 56Gb/s, and the cluster works fine. The
> > public network runs on Juniper QFX5100 switches in a layer 2+3 VLAN
> > configuration, but the network team needs to move to full layer 3 and
> > wants to use BGP. So the questions are: how can we move to that
> > scheme? What are the considerations? Is it possible? Is there a
> > step-by-step way to do the migration? And is there anything better
> > than BGP, or are there other alternatives?
>
> Ceph doesn't care at all. As long as the nodes can communicate with
> each other, it's fine. How easily you can move to this L3 model depends
> on your failure domains. Do you have separate datacenters, or separate
> racks, that you can migrate one by one?
>
> And you can do BGP at different levels: on the router, on the top-of-rack
> switches, or even on the Ceph host itself (FRR).
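>
> For illustration, a BGP-on-the-host setup with FRR might look roughly
> like the following (just a sketch; the ASNs, addresses, and the /32
> announcement are hypothetical placeholders, not a tested config):
>
>     ! /etc/frr/frr.conf on a Ceph node (hypothetical values)
>     router bgp 65001
>      bgp router-id 10.0.0.11
>      ! peer with the directly attached ToR switch
>      neighbor 172.16.0.1 remote-as 65000
>      address-family ipv4 unicast
>       ! announce this node's public address as a /32
>       network 10.0.0.11/32
>      exit-address-family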
>
> We use BGP / VXLAN / EVPN for our Ceph cluster. But it all depends on
> why your networking team wants to change to L3.
>
> There are no step-by-step guides, as most deployments are unique.
>
> This might be a good time to reconsider the separate cluster network.
> Normally there is no need for one, and dropping it might make things
> simpler.
>
> Do you have separate storage switches? Where are your clients connected
> (to separate switches, or to the storage switches as well)?
>
> This is not easy to answer without all the details. But there are
> certainly clusters running with BGP in the field just fine.
>
> Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


