This is a note to let you know that I've just added the patch titled

    net: team: Unsync device addresses on ndo_stop

to the 5.4-stable tree which can be found at:

http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     net-team-unsync-device-addresses-on-ndo_stop.patch
and it can be found in the queue-5.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.



commit e232d8530fd907a3cec1288dccd68df9eda15c59
Author: Benjamin Poirier <bpoirier@xxxxxxxxxx>
Date:   Wed Sep 7 16:56:41 2022 +0900

    net: team: Unsync device addresses on ndo_stop

    [ Upstream commit bd60234222b2fd5573526da7bcd422801f271f5f ]

    Netdev drivers are expected to call dev_{uc,mc}_sync() in their
    ndo_set_rx_mode method and dev_{uc,mc}_unsync() in their ndo_stop
    method. This is mentioned in the kerneldoc for those dev_* functions.

    The team driver calls dev_{uc,mc}_unsync() during ndo_uninit instead
    of ndo_stop. This is ineffective because address lists (dev->{uc,mc})
    have already been emptied in unregister_netdevice_many() before
    ndo_uninit is called. This mistake can result in addresses being
    leftover on former team ports after a team device has been deleted;
    see test_LAG_cleanup() in the last patch in this series.

    Add unsync calls at their expected location, team_close().

    v3:
    * When adding or deleting a port, only sync/unsync addresses if the
      team device is up. In other cases, it is taken care of at the right
      time by ndo_open/ndo_set_rx_mode/ndo_stop.

    Fixes: 3d249d4ca7d0 ("net: introduce ethernet teaming device")
    Signed-off-by: Benjamin Poirier <bpoirier@xxxxxxxxxx>
    Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
index 0eb894b7c0bd..da74ec778b6e 100644
--- a/drivers/net/team/team.c
+++ b/drivers/net/team/team.c
@@ -1270,10 +1270,12 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
 		}
 	}

-	netif_addr_lock_bh(dev);
-	dev_uc_sync_multiple(port_dev, dev);
-	dev_mc_sync_multiple(port_dev, dev);
-	netif_addr_unlock_bh(dev);
+	if (dev->flags & IFF_UP) {
+		netif_addr_lock_bh(dev);
+		dev_uc_sync_multiple(port_dev, dev);
+		dev_mc_sync_multiple(port_dev, dev);
+		netif_addr_unlock_bh(dev);
+	}

 	port->index = -1;
 	list_add_tail_rcu(&port->list, &team->port_list);
@@ -1344,8 +1346,10 @@ static int team_port_del(struct team *team, struct net_device *port_dev)
 	netdev_rx_handler_unregister(port_dev);
 	team_port_disable_netpoll(port);
 	vlan_vids_del_by_dev(port_dev, dev);
-	dev_uc_unsync(port_dev, dev);
-	dev_mc_unsync(port_dev, dev);
+	if (dev->flags & IFF_UP) {
+		dev_uc_unsync(port_dev, dev);
+		dev_mc_unsync(port_dev, dev);
+	}
 	dev_close(port_dev);
 	team_port_leave(team, port);

@@ -1694,6 +1698,14 @@ static int team_open(struct net_device *dev)

 static int team_close(struct net_device *dev)
 {
+	struct team *team = netdev_priv(dev);
+	struct team_port *port;
+
+	list_for_each_entry(port, &team->port_list, list) {
+		dev_uc_unsync(port->dev, dev);
+		dev_mc_unsync(port->dev, dev);
+	}
+
 	return 0;
 }
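
As an aside (not part of the patch itself), the contract the commit message
refers to can be sketched for a hypothetical upper-device driver with a
single lower device. The foo_* names and struct foo_priv below are made up
for illustration; team itself walks its port list and uses the
dev_{uc,mc}_sync_multiple() variants when a port is added, as the hunks
above show.

#include <linux/netdevice.h>

/* Hypothetical upper-device private data; "lower" is the single
 * enslaved device whose address lists are kept in sync.
 */
struct foo_priv {
	struct net_device *lower;
};

/* ndo_set_rx_mode: push the upper device's uc/mc lists down. */
static void foo_set_rx_mode(struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);

	dev_uc_sync(priv->lower, dev);
	dev_mc_sync(priv->lower, dev);
}

/* ndo_stop: remove whatever foo_set_rx_mode added while dev was up. */
static int foo_stop(struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);

	dev_uc_unsync(priv->lower, dev);
	dev_mc_unsync(priv->lower, dev);
	return 0;
}

static const struct net_device_ops foo_netdev_ops = {
	.ndo_set_rx_mode	= foo_set_rx_mode,
	.ndo_stop		= foo_stop,
};

Unsyncing from ndo_stop rather than ndo_uninit matters because, by the time
ndo_uninit runs during unregistration, dev->uc and dev->mc have already been
flushed, so there is nothing left to unsync from the lower device.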