Re: Ceph OSD network with IPv6 SLAAC networks?

We are implementing an IPv6-native Ceph cluster using SLAAC. We have some legacy machines that, for various reasons, can only use IPv4 and not IPv6 (yeah, I know). I'm wondering what would happen if I added an IPv4 address on the radosgw in addition to the IPv6 address that is already in use. The rest of the Ceph cluster components would have IPv6 only; the radosgw would be the only one with IPv4. Do you think this would be good practice, or should I stick to IPv6 only?
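
I was thinking of something along these lines in ceph.conf for the radosgw (the section name and ports are just examples, and I'm not completely sure about the civetweb syntax for binding both address families):

  [client.rgw.gateway1]
  rgw frontends = civetweb port=7480+[::]:7480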

2017-03-31 17:36 GMT+02:00 Wido den Hollander <wido@xxxxxxxx>:

> On 30 March 2017 at 20:13, Richard Hesse <richard.hesse@xxxxxxxxxx> wrote:
>
>
> Thanks for the reply Wido! How do you handle IPv6 routes and routing with
> IPv6 on the public and cluster networks? You mentioned that your cluster
> network is routed, so the nodes will need routes to reach the other racks,
> but you can't have more than one default gateway. Are you running a routing
> protocol to handle that?
>

I don't. These clusters run without either a public or a cluster network. Each host has one IP address.

I rarely use public/cluster networks as they don't add anything for most systems. 20Gbit of bandwidth per node is more than enough in most cases, and in my opinion multiple IPs per machine only add complexity.

Wido

> We're using classless static routes via DHCP on v4 to solve this problem,
> and I'm curious what the v6 SLAAC equivalent is.
>
> Thanks,
> -richard
>
> On Tue, Mar 28, 2017 at 8:30 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>
> >
> > > On 27 March 2017 at 21:49, Richard Hesse <richard.hesse@xxxxxxxxxx> wrote:
> > >
> > >
> > > Has anyone run their Ceph OSD cluster network on IPv6 using SLAAC? I know
> > > that Ceph supports IPv6, but I'm not sure how it would deal with the
> > > address rotation in SLAAC, permanent vs. outgoing addresses, etc. It would
> > > be very nice for me, as I wouldn't have to run any kind of DHCP server or
> > > use static addressing -- just configure RAs and go.
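> > >
> > > As a rough sketch of what I mean by "configure RAs" (radvd here, with a
> > > documentation prefix; the same could be done on a ToR switch):
> > >
> > >   interface eth0 {
> > >       AdvSendAdvert on;
> > >       prefix 2001:db8:0:1::/64 {
> > >           AdvOnLink on;
> > >           AdvAutonomous on;
> > >       };
> > >   };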
> > >
> >
> > Yes, I do in many clusters. Works fine! SLAAC doesn't generate random
> > addresses that change over time. That's a feature called 'Privacy
> > Extensions' and is controlled on Linux by:
> >
> > - net.ipv6.conf.all.use_tempaddr
> > - net.ipv6.conf.default.use_tempaddr
> > - net.ipv6.conf.X.use_tempaddr
> >
> > Set these to 0 and the kernel will generate one address based on the
> > MAC address (EUI-64) of the interface. This address is stable and will not
> > change.
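> >
> > A minimal sketch for making that persistent, e.g. in
> > /etc/sysctl.d/10-no-tempaddr.conf (eth0 is just an example interface name):
> >
> >   net.ipv6.conf.all.use_tempaddr = 0
> >   net.ipv6.conf.default.use_tempaddr = 0
> >   net.ipv6.conf.eth0.use_tempaddr = 0
> >
> > Load it with 'sysctl --system', ideally before the interface comes up.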
> >
> > I like this very much as I don't have any static or complex network
> > configurations on the hosts. It moves the whole responsibility for
> > networking and addressing to the network. A host just boots and obtains an IP.
> >
> > The OSDs contact the MONs on boot and report the address they are using.
> > OSDs do not need a fixed address for Ceph.
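> >
> > As a sketch, the only address-related configuration Ceph then needs is the
> > monitor addresses in ceph.conf (these are documentation addresses, not real
> > ones):
> >
> >   [global]
> >   ms bind ipv6 = true
> >   mon host = [2001:db8:0:1::10]:6789, [2001:db8:0:2::10]:6789, [2001:db8:0:3::10]:6789
> >
> > Everything else finds its address via SLAAC as described above.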
> >
> > However, using SLAAC without Privacy Extensions means that in practice a
> > machine's address will not change, so you don't need to worry about it
> > much.
> >
> > The biggest system I have running this way is 400 nodes running IPv6-only.
> > 10 racks, 40 nodes per rack. Each rack has a Top-of-Rack switch running in
> > Layer 3 and a /64 is assigned per rack.
> >
> > Layer 3 routing is used between the racks, so based on the IPv6 address
> > we can even determine which rack a host/OSD is in.
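> >
> > As an illustration, with a documentation prefix rather than our real one,
> > the per-rack plan looks like:
> >
> >   rack 1:  2001:db8:0:1::/64
> >   rack 2:  2001:db8:0:2::/64
> >   ...
> >   rack 10: 2001:db8:0:a::/64
> >
> > so an OSD with an address in 2001:db8:0:7::/64 is immediately known to sit
> > in rack 7.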
> >
> > Layer 2 domains don't extend across racks, which makes a rack a true
> > failure domain in our case.
> >
> > Wido
> >
> > > On that note, does anyone have any experience with running Ceph in a
> > > mixed v4 and v6 environment?
> > >
> > > Thanks,
> > > -richard
> >



--
Félix Barbeira.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
