On Wed, Nov 27, 2024 at 09:43:54AM -0800, Guenter Roeck wrote:
> Hi,
>
> On Fri, Oct 11, 2024 at 06:45:00PM +0000, Joe Damato wrote:
> > Add a persistent NAPI config area for NAPI configuration to the core.
> > Drivers opt-in to setting the persistent config for a NAPI by passing an
> > index when calling netif_napi_add_config.
> >
> > napi_config is allocated in alloc_netdev_mqs, freed in free_netdev
> > (after the NAPIs are deleted).
> >
> > Drivers which call netif_napi_add_config will have persistent per-NAPI
> > settings: NAPI IDs, gro_flush_timeout, and defer_hard_irq settings.
> >
> > Per-NAPI settings are saved in napi_disable and restored in napi_enable.
> >
> > Co-developed-by: Martin Karsten <mkarsten@xxxxxxxxxxxx>
> > Signed-off-by: Martin Karsten <mkarsten@xxxxxxxxxxxx>
> > Signed-off-by: Joe Damato <jdamato@xxxxxxxxxx>
> > Reviewed-by: Jakub Kicinski <kuba@xxxxxxxxxx>
>
> This patch triggers a lock inversion message on pcnet Ethernet adapters.

Thanks for the report. I am not familiar with the pcnet driver, but took
some time now to read the report below and the driver code.

I could definitely be reading the output incorrectly (if so, please let
me know), but it seems like the issue can be triggered in this case:

CPU 0:
  pcnet32_open
    lock(lp->lock)
      napi_enable
        napi_hash_add
          lock(napi_hash_lock)
          unlock(napi_hash_lock)
    unlock(lp->lock)

Meanwhile on CPU 1:

  pcnet32_close
    napi_disable
      napi_hash_del
        lock(napi_hash_lock)
        unlock(napi_hash_lock)
    lock(lp->lock)
    [... other code ...]
    unlock(lp->lock)
    [... other code ...]
    lock(lp->lock)
    [... other code ...]
    unlock(lp->lock)

In other words: while the close path is holding napi_hash_lock (and
before it acquires lp->lock), the enable path takes lp->lock and then
napi_hash_lock.

It seems this was triggered because, before the identified commit,
napi_enable did not call napi_hash_add (and thus did not take the
napi_hash_lock).

So, I agree there is an inversion; I can't say for sure whether it would
cause a deadlock in certain situations. Since napi_hash_del in the close
path releases napi_hash_lock and returns before lp->lock is taken, the
inversion doesn't seem like it'd lead to a deadlock, but I am not an
expert in this and could certainly be wrong.

I wonder if a potential fix for this would be in the driver's close
function? In pcnet32_open the order is:

  lock(lp->lock)
    napi_enable
    netif_start_queue
    mod_timer(watchdog)
  unlock(lp->lock)

Perhaps pcnet32_close should be the same? I've included an example patch
below, and I've CC'd the pcnet32 maintainer, who was not previously CC'd.

Guenter: Is there any chance you might be able to test the proposed patch
below?

Don: Would you mind taking a look to see if this change is sensible?

Netdev maintainers: at a higher level, I'm not sure how many other
drivers might have locking patterns like this that commit 86e25f40aa1e
("net: napi: Add napi_config") will break in a similar manner. Should I:

  - comb through drivers trying to identify these, and/or
  - find a way to implement the identified commit with the original lock
    ordering, to avoid breaking any other drivers?

I'd appreciate guidance/insight from the maintainers on how to best
proceed.
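For reference, here is a rough sketch of the two orderings described
above (simplified from the trace, not the actual driver source; that
napi_enable()/napi_disable() now reach napi_hash_add()/napi_hash_del(),
and thus napi_hash_lock, is my reading of the trace and of commit
86e25f40aa1e):

    /* Simplified sketch, not verbatim pcnet32 code. */

    /* pcnet32_open(): lp->lock is held around napi_enable(), so this
     * path orders lp->lock before napi_hash_lock.
     */
    spin_lock_irqsave(&lp->lock, flags);
    napi_enable(&lp->napi);       /* -> napi_hash_add() -> napi_hash_lock */
    netif_start_queue(dev);
    mod_timer(&lp->watchdog_timer, PCNET32_WATCHDOG_TIMEOUT);
    spin_unlock_irqrestore(&lp->lock, flags);

    /* pcnet32_close() today: napi_disable() runs before lp->lock is
     * taken, so this path touches napi_hash_lock first and lp->lock
     * afterwards.
     */
    del_timer_sync(&lp->watchdog_timer);
    netif_stop_queue(dev);
    napi_disable(&lp->napi);      /* -> napi_hash_del() -> napi_hash_lock */
    spin_lock_irqsave(&lp->lock, flags);
    /* ... */
    spin_unlock_irqrestore(&lp->lock, flags);

The example patch below just moves the spin_lock_irqsave() in
pcnet32_close() up so that both paths take lp->lock first.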
diff --git a/drivers/net/ethernet/amd/pcnet32.c b/drivers/net/ethernet/amd/pcnet32.c
index 72db9f9e7bee..ff56a308fec9 100644
--- a/drivers/net/ethernet/amd/pcnet32.c
+++ b/drivers/net/ethernet/amd/pcnet32.c
@@ -2623,13 +2623,13 @@ static int pcnet32_close(struct net_device *dev)
 	struct pcnet32_private *lp = netdev_priv(dev);
 	unsigned long flags;
 
+	spin_lock_irqsave(&lp->lock, flags);
+
 	del_timer_sync(&lp->watchdog_timer);
 
 	netif_stop_queue(dev);
 	napi_disable(&lp->napi);
 
-	spin_lock_irqsave(&lp->lock, flags);
-
 	dev->stats.rx_missed_errors = lp->a->read_csr(ioaddr, 112);
 
 	netif_printk(lp, ifdown, KERN_DEBUG, dev,