Hello,

we are running everything IPv6-only. You just need to set up the MTU
correctly on your devices (NICs, switches); nothing Ceph- or IPv6-specific
is required. If you are using SLAAC (like we do), you can also announce the
MTU via RA; a rough example follows below the quoted thread.

Best,

Nico

Jack <ceph@xxxxxxxxxxxxxx> writes:

> Or maybe you reach that IPv4 address directly, and that IPv6 address via a
> router, somehow.
>
> Check your routing table and neighbor table.
>
> On 27/10/2017 16:02, Wido den Hollander wrote:
>>
>>> On 27 October 2017 at 14:22, Félix Barbeira <fbarbeira@xxxxxxxxx> wrote:
>>>
>>>
>>> Hi,
>>>
>>> I'm trying to configure a Ceph cluster using IPv6 only, but I can't
>>> enable jumbo frames. I set the MTU in the 'interfaces' file and the
>>> value appears to be applied, but when I test it, it only works for
>>> IPv4, not IPv6.
>>>
>>> It works on IPv4:
>>>
>>> root@ceph-node01:~# ping -c 3 -M do -s 8972 ceph-node02
>>>
>>> PING ceph-node02 (x.x.x.x) 8972(9000) bytes of data.
>>> 8980 bytes from ceph-node02 (x.x.x.x): icmp_seq=1 ttl=64 time=0.474 ms
>>> 8980 bytes from ceph-node02 (x.x.x.x): icmp_seq=2 ttl=64 time=0.254 ms
>>> 8980 bytes from ceph-node02 (x.x.x.x): icmp_seq=3 ttl=64 time=0.288 ms
>>>
>>
>> Verify with Wireshark/tcpdump whether it really sends 9k packets. I doubt it.
>>
>>> --- ceph-node02 ping statistics ---
>>> 3 packets transmitted, 3 received, 0% packet loss, time 2000ms
>>> rtt min/avg/max/mdev = 0.254/0.338/0.474/0.099 ms
>>>
>>> root@ceph-node01:~#
>>>
>>> But *not* on IPv6:
>>>
>>> root@ceph-node01:~# ping6 -c 3 -M do -s 8972 ceph-node02
>>> PING ceph-node02(x:x:x:x:x:x:x:x) 8972 data bytes
>>> ping: local error: Message too long, mtu=1500
>>> ping: local error: Message too long, mtu=1500
>>> ping: local error: Message too long, mtu=1500
>>>
>>
>> Like Ronny already mentioned, check the switches and the receiver. There
>> is a 1500 MTU configured somewhere.
>>
>> Wido
>>
>>> --- ceph-node02 ping statistics ---
>>> 4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3024ms
>>>
>>> root@ceph-node01:~#
>>>
>>> root@ceph-node01:~# ifconfig
>>> eno1      Link encap:Ethernet  HWaddr 24:6e:96:05:55:f8
>>>           inet6 addr: 2a02:x:x:x:x:x:x:x/64 Scope:Global
>>>           inet6 addr: fe80::266e:96ff:fe05:55f8/64 Scope:Link
>>>           UP BROADCAST RUNNING MULTICAST  *MTU:9000*  Metric:1
>>>           RX packets:633318 errors:0 dropped:0 overruns:0 frame:0
>>>           TX packets:649607 errors:0 dropped:0 overruns:0 carrier:0
>>>           collisions:0 txqueuelen:1000
>>>           RX bytes:463355602 (463.3 MB)  TX bytes:498891771 (498.8 MB)
>>>
>>> lo        Link encap:Local Loopback
>>>           inet addr:127.0.0.1  Mask:255.0.0.0
>>>           inet6 addr: ::1/128 Scope:Host
>>>           UP LOOPBACK RUNNING  MTU:65536  Metric:1
>>>           RX packets:127420 errors:0 dropped:0 overruns:0 frame:0
>>>           TX packets:127420 errors:0 dropped:0 overruns:0 carrier:0
>>>           collisions:0 txqueuelen:1
>>>           RX bytes:179470326 (179.4 MB)  TX bytes:179470326 (179.4 MB)
>>>
>>> root@ceph-node01:~#
>>>
>>> root@ceph-node01:~# cat /etc/network/interfaces
>>> # This file describes the network interfaces available on your system
>>> # and how to activate them. For more information, see interfaces(5).
>>>
>>> source /etc/network/interfaces.d/*
>>>
>>> # The loopback network interface
>>> auto lo
>>> iface lo inet loopback
>>>
>>> # The primary network interface
>>> auto eno1
>>> iface eno1 inet6 auto
>>>     post-up ifconfig eno1 mtu 9000
>>> root@ceph-node01:#
>>>
>>> Please help!
>>>
>>> --
>>> Félix Barbeira.
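
For the RA part: a minimal sketch of announcing the MTU, assuming radvd as
the RA daemon on the router side and eno1 as the jumbo-frame interface (the
prefix below is a placeholder, substitute your real one):

    # /etc/radvd.conf -- untested sketch, adjust interface and prefix
    interface eno1
    {
        AdvSendAdvert on;
        # SLAAC clients take their link MTU from this RA option
        AdvLinkMTU 9000;
        prefix 2a02:x:x:x::/64
        {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };

And for the "mtu=1500" error above, some generic checks on the sending node
(plain iproute2/tcpdump, nothing Ceph-specific):

    ip -6 route get <address-of-ceph-node02>    # MTU the kernel will use for that path
    ip -6 neigh show dev eno1                   # is the peer on-link, or behind a router?
    cat /proc/sys/net/ipv6/conf/eno1/mtu        # per-interface IPv6 MTU (an RA advertising 1500 pins it there)
    tcpdump -ni eno1 'ip6 and greater 8000'     # confirm 9k frames actually leave the box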
--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com