Re: Status of IPv4 / IPv6 dual stack?

Hello,

a note: we have been running IPv6-only clusters since 2017, in case anyone
has questions. In earlier releases no tuning was necessary; later releases
need the bind parameters (a rough example below).
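
A minimal sketch of what that looks like in ceph.conf for an IPv6-only
cluster (the addresses below are placeholders from the documentation
prefix, so adapt them to your deployment):

    [global]
        # bind the messengers to IPv6 only; without ms_bind_ipv4 = false
        # newer releases will also register IPv4 0.0.0.0 addresses
        ms_bind_ipv6 = true
        ms_bind_ipv4 = false
        # public/cluster networks as IPv6 prefixes (placeholders)
        public_network  = 2001:db8:0:1::/64
        cluster_network = 2001:db8:0:2::/64
        # monitor addresses as bracketed IPv6 literals, msgr2 + msgr1
        mon_host = [v2:[2001:db8:0:1::10]:3300,v1:[2001:db8:0:1::10]:6789]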

BR,

Nico

Stefan Kooman <stefan@xxxxxx> writes:

> On 15-09-2023 09:25, Robert Sander wrote:
>> Hi,
>> as the documentation sends mixed signals in
>> https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/#ipv4-ipv6-dual-stack-mode
>> "Note
>> Binding to IPv4 is enabled by default, so if you just add the option
>> to bind to IPv6 you’ll actually put yourself into dual stack mode."
>> and
>> https://docs.ceph.com/en/latest/rados/configuration/msgr2/#address-formats
>> "Note
>> The ability to bind to multiple ports has paved the way for
>> dual-stack IPv4 and IPv6 support. That said, dual-stack operation is
>> not yet supported as of Quincy v17.2.0."
>> just the quick questions:
>> Is dual-stack networking with IPv4 and IPv6 now supported or not?
>> From which version on is it considered stable?
>
> IIRC, the "enable dual stack" PRs were more or less "accidentally"
> merged, at least that's what Radoslaw Zarzynski (added to CC) told me
> during the developer summit at Cephalocon in Amsterdam. There was a
> discussion about dual-stack support after that. I voted in favor of
> not supporting dual stack. Currently no IPv6-only tests are run at
> all; testing is IPv4 only, let alone dual-stack test setups. It gets
> complicated quickly if you want to test all sorts of combinations
> (some daemons with dual stack, some IPv4 only, some IPv6 only, etc.).
>
>
>> Are OSDs now able to register themselves with two IP addresses in
>> the cluster map? MONs too?
>
> At least the OSDs and MDSs can, and this caused trouble for kernel
> clients with messenger v2 support. We had to disable IPv4 explicitly to
> get rid of the IPv4 "0.0.0.0" addresses in the MDS map. See this
> thread [1].
>
> Gr. Stefan
>
> [1]:
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/GLNS2S6BK7Q5ECUT3G53EP5CCXNFENXQ/
>
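
Regarding the "0.0.0.0" entries Stefan mentions: what worked for us was
disabling IPv4 binding explicitly and then restarting the affected
daemons. A rough sketch, assuming a systemd-based deployment (unit names
differ between packaged and cephadm installs):

    # stop all daemons from binding to IPv4
    ceph config set global ms_bind_ipv4 false
    ceph config set global ms_bind_ipv6 true

    # restart so the daemons re-register their addresses in the maps
    systemctl restart ceph-mds.target ceph-osd.target

    # verify that no 0.0.0.0 addresses are left in the maps
    ceph fs dump | grep addr
    ceph osd dump | grep '^osd\.'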


--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



