We run this exact style of setup on our OSD Ceph nodes (RH7-based). The one really _really_ silly thing we noticed is that the network interfaces tended to be brought up in alphabetical order no matter what. We needed our bond interfaces (frontnet and backnet) to come up after the physical VLAN links (enp131s0f[0,1]). This was fine for frontnet because it sorts after "enp*", but backnet was an issue. We cheated and just renamed backnet to "zbacknet".

The quick and dirty fix is to rename your bond interfaces to something that sorts alphabetically after "enp".

-paul

--
Paul Mezzanini
Sr Systems Administrator / Engineer, Research Computing
Information & Technology Services
Finance & Administration
Rochester Institute of Technology
o:(585) 475-3245 | pfmeec@xxxxxxx

________________________________________
From: James McEwan <james.mcewan@xxxxxxxxx>
Sent: Tuesday, March 31, 2020 9:32 AM
To: ceph-users@xxxxxxx
Subject: Re: Netplan bonding configuration

Hi,

I am currently building a 10-node Ceph cluster. Each OSD node has 2x 25 Gbit/s NICs, and I have 2 TOR switches (MLAG not supported).

enp179s0f0 -> sw1
enp179s0f1 -> sw2

VLAN 323 is used for the 'public network'
VLAN 324 is used for the 'cluster network'

My desired configuration is to create two bond interfaces in active-backup mode:

bond0
  - enp179s0f0.323 (active)
  - enp179s0f1.323 (backup)

bond1
  - enp179s0f0.324 (backup)
  - enp179s0f1.324 (active)

This way, the public network will use switch1 and the cluster network will use switch2 under normal operation.

I am, however, having an issue implementing this configuration on Ubuntu 18.04 with netplan (see configuration at the end of this post). When I reboot a node with the below netplan configuration, the bond interface is created, but the VLAN interfaces are not added to the bond. I see the following errors in the log:

systemd-networkd[1641]: enp179s0f0.323: Enslaving by 'bond0'
systemd-networkd[1641]: bond0: Enslaving link 'enp179s0f0.323'
systemd-networkd[1641]: enp179s0f1.323: Enslaving by 'bond0'
systemd-networkd[1641]: bond0: Enslaving link 'enp179s0f1.323'
systemd-networkd[1643]: enp179s0f1.323: Could not join netdev: Operation not permitted
systemd-networkd[1643]: enp179s0f1.323: Failed
systemd-networkd[1643]: enp179s0f0.323: Could not join netdev: Operation not permitted
systemd-networkd[1643]: enp179s0f0.323: Failed

If I manually run 'systemctl restart systemd-networkd' after boot has completed, the bond is successfully created with the VLAN interfaces.

Does anybody have a similar configuration working, specifically with netplan/networkd? Could you please share your configuration?
Netplan config that doesn't work at boot time:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp179s0f0: {}
    enp179s0f1: {}
  bonds:
    bond0:
      dhcp4: false
      dhcp6: false
      interfaces:
        - enp179s0f0.323
        - enp179s0f1.323
      parameters:
        mode: active-backup
        primary: enp179s0f0.323
        mii-monitor-interval: 1
      addresses: [insert address here]
    bond1:
      dhcp4: false
      dhcp6: false
      interfaces:
        - enp179s0f0.324
        - enp179s0f1.324
      parameters:
        mode: active-backup
        primary: enp179s0f1.324
        mii-monitor-interval: 1
      addresses: [insert address here]
  vlans:
    enp179s0f0.323:
      id: 323
      link: enp179s0f0
    enp179s0f1.323:
      id: 323
      link: enp179s0f1
    enp179s0f0.324:
      id: 324
      link: enp179s0f0
    enp179s0f1.324:
      id: 324
      link: enp179s0f1
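For illustration only: below is a minimal sketch of Paul's rename workaround applied to the config above. The names zbond0 and zbond1 are hypothetical, picked only because they sort alphabetically after the enp* VLAN links; the ethernets: and vlans: sections are unchanged, and the addresses placeholders still need to be filled in. This is untested, and whether the alphabetical bring-up order Paul observed on RH7 also governs netplan/systemd-networkd on Ubuntu 18.04 is an assumption; the only thing confirmed to work for this exact netplan setup in the messages above is James's manual 'systemctl restart systemd-networkd' after boot.

  bonds:
    # "zbond0"/"zbond1" are hypothetical names chosen only so the bonds
    # sort alphabetically after "enp*", per Paul's suggestion; all other
    # settings are copied unchanged from bond0/bond1 above
    zbond0:
      dhcp4: false
      dhcp6: false
      interfaces:
        - enp179s0f0.323
        - enp179s0f1.323
      parameters:
        mode: active-backup
        primary: enp179s0f0.323
        mii-monitor-interval: 1
      addresses: [insert address here]
    zbond1:
      dhcp4: false
      dhcp6: false
      interfaces:
        - enp179s0f0.324
        - enp179s0f1.324
      parameters:
        mode: active-backup
        primary: enp179s0f1.324
        mii-monitor-interval: 1
      addresses: [insert address here]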