On 01/04/2020 at 08:29, James, GleSYS wrote:
> Hi Gilles,
>
> Yes, your configuration works with Netplan on Ubuntu 18 as well. However,
> this would use only one of the physical interfaces (the current active
> interface for the bond) for both networks.
>
> The reason I want to create two bonds is to have enp179s0f0 as active for
> the public network, and enp179s0f1 as active for the cluster network,
> therefore spreading the traffic across the NICs.
>
> Regards,
> James.

Ah, I understand. Here I have 4 NICs, so the public and cluster networks are
separate (2 NICs, 1 bond each).

>> On 31 Mar 2020, at 18:33, Gilles Mocellin <gilles.mocellin@xxxxxxxxxxxxxx> wrote:
>>
>> Hello,
>>
>> I don't use netplan, and I'm still on Ubuntu 16.04.
>> But I use VLANs on the bond, not directly on the interfaces:
>>
>> bond0:
>> - enp179s0f0
>> - enp179s0f1
>>
>> Then I use bond0.323 and bond0.324.
>>
>> (I use a bridge on top to be more like my OpenStack cluster, and with more
>> friendly names: br-mgmt, br-storage, br-replic...)
>>
>>
>> On 31/03/2020 at 15:32, James McEwan wrote:
>>> Hi,
>>>
>>> I am currently building a 10-node Ceph cluster. Each OSD node has 2x 25 Gbit/s NICs, and I have 2 TOR switches (MLAG not supported).
>>> enp179s0f0 -> sw1
>>> enp179s0f1 -> sw2
>>>
>>> vlan 323 is used for the 'public network'
>>> vlan 324 is used for the 'cluster network'
>>>
>>> My desired configuration is to create two bond interfaces in active-backup mode:
>>> bond0
>>> - enp179s0f0.323 (active)
>>> - enp179s0f1.323 (backup)
>>> bond1
>>> - enp179s0f0.324 (backup)
>>> - enp179s0f1.324 (active)
>>>
>>> This way, the public network will use switch 1 and the cluster network will use switch 2 under normal operation.
>>>
>>> I am, however, having an issue implementing this configuration on Ubuntu 18.04 with netplan (see configuration at the end of this post).
>>>
>>> When I reboot a node with the below netplan configuration, the bond interface is created, but the VLAN interfaces are not added to the bond.
>>>
>>> I see the following errors in the log:
>>> systemd-networkd[1641]: enp179s0f0.323: Enslaving by 'bond0'
>>> systemd-networkd[1641]: bond0: Enslaving link 'enp179s0f0.323'
>>> systemd-networkd[1641]: enp179s0f1.323: Enslaving by 'bond0'
>>> systemd-networkd[1641]: bond0: Enslaving link 'enp179s0f1.323'
>>> systemd-networkd[1643]: enp179s0f1.323: Could not join netdev: Operation not permitted
>>> systemd-networkd[1643]: enp179s0f1.323: Failed
>>> systemd-networkd[1643]: enp179s0f0.323: Could not join netdev: Operation not permitted
>>> systemd-networkd[1643]: enp179s0f0.323: Failed
>>>
>>> If I manually run 'systemctl restart systemd-networkd' after boot has completed, the bond is successfully created with the VLAN interfaces.
>>>
>>> Does anybody have a similar configuration working, specifically with netplan/networkd? Could you please share your configuration?
>>>
>>> Netplan config that doesn't work at boot time:
>>>
>>> network:
>>>   version: 2
>>>   renderer: networkd
>>>   ethernets:
>>>     enp179s0f0: {}
>>>     enp179s0f1: {}
>>>
>>>   bonds:
>>>     bond0:
>>>       dhcp4: false
>>>       dhcp6: false
>>>       interfaces:
>>>         - enp179s0f0.323
>>>         - enp179s0f1.323
>>>       parameters:
>>>         mode: active-backup
>>>         primary: enp179s0f0.323
>>>         mii-monitor-interval: 1
>>>       addresses: [insert address here]
>>>     bond1:
>>>       dhcp4: false
>>>       dhcp6: false
>>>       interfaces:
>>>         - enp179s0f0.324
>>>         - enp179s0f1.324
>>>       parameters:
>>>         mode: active-backup
>>>         primary: enp179s0f1.324
>>>         mii-monitor-interval: 1
>>>       addresses: [insert address here]
>>>
>>>   vlans:
>>>     enp179s0f0.323:
>>>       id: 323
>>>       link: enp179s0f0
>>>     enp179s0f1.323:
>>>       id: 323
>>>       link: enp179s0f1
>>>     enp179s0f0.324:
>>>       id: 324
>>>       link: enp179s0f0
>>>     enp179s0f1.324:
>>>       id: 324
>>>       link: enp179s0f1

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
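[Editorial note] Since James asked to see a working configuration: below is a rough, untested netplan sketch of the VLAN-on-bond layout Gilles describes in his 31 Mar message (one active-backup bond, VLANs 323/324 on top of the bond, bridges on top of the VLANs). Gilles uses ifupdown on 16.04, not netplan, so this is only a translation; the bridge names come from his message, and placing the addresses on the bridges and using the networkd renderer are assumptions, not his actual config:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp179s0f0: {}
    enp179s0f1: {}

  bonds:
    # Single bond over both physical NICs; VLANs ride on the bond,
    # so no VLAN interface is ever enslaved to a bond.
    bond0:
      dhcp4: false
      dhcp6: false
      interfaces: [enp179s0f0, enp179s0f1]
      parameters:
        mode: active-backup
        mii-monitor-interval: 1

  vlans:
    bond0.323:
      id: 323
      link: bond0
    bond0.324:
      id: 324
      link: bond0

  bridges:
    # Friendly-named bridges as in Gilles' setup; addresses are placeholders.
    br-mgmt:
      interfaces: [bond0.323]
      addresses: [insert public network address here]
    br-storage:
      interfaces: [bond0.324]
      addresses: [insert cluster network address here]

As James points out earlier in the thread, this layout keeps both networks on the bond's single active interface, so it does not spread public and cluster traffic across the two NICs the way his two-bond design is intended to.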