Re: Full L3 Ceph

Hi Stefan,

I think I missed your reply. I'm interested in how you manage performance when running Ceph over a host-based VXLAN overlay. Perhaps you can share a comparison, for a better understanding of the possible performance impact.

Best regards,
 
Date: Sun, 25 Nov 2018 21:17:34 +0100
From: Stefan Kooman <stefan@xxxxxx>
To: "Robin H. Johnson" <robbat2@xxxxxxxxxx>
Cc: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Full L3 Ceph
Message-ID: <20181125201734.GC17245@xxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset="us-ascii"

Quoting Robin H. Johnson (robbat2@xxxxxxxxxx):
> On Fri, Nov 23, 2018 at 04:03:25AM +0700, Lazuardi Nasution wrote:
> > I'm looking for an example Ceph configuration and topology for a full
> > layer 3 networking deployment. Maybe all daemons can use a loopback alias
> > address in this case. But how should the cluster network and public
> > network be configured, using a supernet? I think using a loopback alias
> > address can prevent the daemons from going down due to physical interface
> > disconnection, and can load balance traffic between physical interfaces
> > without interface bonding, using ECMP instead.
> I can say I've done something similar**, but I don't have access to that
> environment or most*** of the configuration anymore.
>
> One of the parts I do recall, was explicitly setting cluster_network
> and public_network to empty strings, AND using public_addr+cluster_addr
> instead, with routable addressing on dummy interfaces (NOT loopback).
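A minimal sketch of that approach might look as follows (interface names and addresses are hypothetical examples, not taken from the original environment):

```shell
# Dummy interface carrying a routable /32, announced into the fabric
# so it stays reachable via any physical uplink (ECMP handles the rest).
ip link add ceph0 type dummy
ip addr add 192.0.2.11/32 dev ceph0
ip link set ceph0 up

# ceph.conf: blank out the network auto-detection and pin the address
# of the dummy interface instead (per-host).
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
cluster network =
public network =
public addr = 192.0.2.11
cluster addr = 192.0.2.11
EOF
```

With the networks set to empty strings, Ceph binds to the pinned addresses rather than trying to pick an interface from a subnet, which is what makes the routed/dummy-interface setup work.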

You can do this with MP-BGP (VXLAN) EVPN. We are running it like that:
IPv6 overlay network only, with ECMP to make use of all the links. We don't
use a separate cluster network. That only complicates things, and
there's no real use for it (trademark by Wido den Hollander). If you
want to run BGP on the hosts themselves, have a look at this post by
Vincent Bernat (great writeups of complex networking stuff) [1]. You can
use "MC-LAG" on the host to get redundant connectivity, or use "Type 4"
EVPN to get endpoint redundancy (Ethernet Segment Route). FRR 6.0 has
support for most of this (not yet "Type 4" EVPN support, IIRC) [2].
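For reference, a bare-bones FRR configuration fragment for host-side BGP with the EVPN address family could look something like this (the ASN, neighbor address, and VNI handling are illustrative assumptions, not the configuration from the setup described above):

```
router bgp 65001
 neighbor 2001:db8:ffff::1 remote-as external
 !
 address-family ipv6 unicast
  neighbor 2001:db8:ffff::1 activate
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor 2001:db8:ffff::1 activate
  advertise-all-vni
 exit-address-family
```

Here `advertise-all-vni` makes FRR originate EVPN routes for all locally configured VXLAN VNIs; see the FRR documentation [2] for the details and version-specific caveats.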

We use a network namespace to separate (IPv6) management traffic
from production traffic. This complicates Ceph deployment a lot, but in
the end it's worth it.
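The namespace split can be sketched like this (interface name and address are hypothetical; the actual deployment details were not given):

```shell
# Create a management namespace and move the management NIC into it,
# so management traffic never shares routing state with production.
ip netns add mgmt
ip link set eth2 netns mgmt
ip netns exec mgmt ip addr add 2001:db8:1::10/64 dev eth2
ip netns exec mgmt ip link set eth2 up

# Management-plane daemons are then started inside the namespace, e.g.:
# ip netns exec mgmt /usr/sbin/sshd
```

The deployment complexity mentioned above comes from the fact that tooling which assumes a single network view (installers, orchestration agents) has to be told which namespace to operate in.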

Gr. Stefan

[1]: https://vincent.bernat.ch/en/blog/2017-vxlan-bgp-evpn
[2]: https://frrouting.org/


--
| BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
