On Wed, Nov 27, 2024 at 10:51:16AM -0800, Joe Damato wrote:
> On Wed, Nov 27, 2024 at 09:43:54AM -0800, Guenter Roeck wrote:
> > Hi,
> >
> > On Fri, Oct 11, 2024 at 06:45:00PM +0000, Joe Damato wrote:
> > > Add a persistent NAPI config area for NAPI configuration to the core.
> > > Drivers opt-in to setting the persistent config for a NAPI by passing an
> > > index when calling netif_napi_add_config.
> > >
> > > napi_config is allocated in alloc_netdev_mqs, freed in free_netdev
> > > (after the NAPIs are deleted).
> > >
> > > Drivers which call netif_napi_add_config will have persistent per-NAPI
> > > settings: NAPI IDs, gro_flush_timeout, and defer_hard_irq settings.
> > >
> > > Per-NAPI settings are saved in napi_disable and restored in napi_enable.
> > >
> > > Co-developed-by: Martin Karsten <mkarsten@xxxxxxxxxxxx>
> > > Signed-off-by: Martin Karsten <mkarsten@xxxxxxxxxxxx>
> > > Signed-off-by: Joe Damato <jdamato@xxxxxxxxxx>
> > > Reviewed-by: Jakub Kicinski <kuba@xxxxxxxxxx>
> >
> > This patch triggers a lock inversion message on pcnet Ethernet adapters.
>
> Thanks for the report. I am not familiar with the pcnet driver, but
> took some time now to read the report below and the driver code.
>
> I could definitely be reading the output incorrectly (if so please
> let me know), but it seems like the issue can be triggered in this
> case:

Sorry, my apologies, I both:
  - read the report incorrectly, and
  - proposed a bad patch that would result in a deadlock :)

After re-reading it and running this by Martin (who is CC'd), the
inversion is actually:

CPU 0:
  pcnet32_open
    lock(lp->lock)
    napi_enable
      napi_hash_add  <- before this executes, CPU 1 proceeds
        lock(napi_hash_lock)

CPU 1:
  pcnet32_close
    napi_disable
      napi_hash_del
        lock(napi_hash_lock)
        < INTERRUPT >
          pcnet32_interrupt
            lock(lp->lock)

This is now an inversion because:

  CPU 0: holds lp->lock and is about to take napi_hash_lock
  CPU 1: holds napi_hash_lock, and an IRQ firing on CPU 1 tries to
         take lp->lock (which CPU 0 already holds)

Neither side can proceed:
  - CPU 0 is stuck waiting for napi_hash_lock
  - CPU 1 is stuck waiting for lp->lock

I think the below explanation is still correct as to why the
identified commit causes the issue:

> It seems this was triggered because before the identified commit,
> napi_enable did not call napi_hash_add (and thus did not take the
> napi_hash_lock).

However, the previous patch I proposed for pcnet32 would also cause a
deadlock, as the watchdog timer's function also needs lp->lock. A
corrected patch for pcnet32 can be found below.

Guenter: Sorry, would you mind testing the below instead of the
previous patch?

Don: Let me know what you think about the below?
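Before the patch itself, in case it helps to see the shape of the
inversion outside the driver, the same AB-BA pattern boiled down to a
toy userspace program (made-up names, lp_lock and napi_hash_lock are
just stand-ins; this is not driver or kernel code, and running it will
usually just hang, which is the point):

/*
 * Toy illustration only. Two threads take the same two locks in
 * opposite orders, the classic AB-BA pattern lockdep warns about.
 * Build with: cc -pthread inversion.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lp_lock = PTHREAD_MUTEX_INITIALIZER;        /* stands in for lp->lock */
static pthread_mutex_t napi_hash_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for napi_hash_lock */

/* "CPU 0": open path, lp->lock first, then napi_hash_lock */
static void *open_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lp_lock);
	usleep(1000);                      /* widen the race window */
	pthread_mutex_lock(&napi_hash_lock);
	puts("open path got both locks");
	pthread_mutex_unlock(&napi_hash_lock);
	pthread_mutex_unlock(&lp_lock);
	return NULL;
}

/* "CPU 1": close path, napi_hash_lock first, then (via the IRQ) lp->lock */
static void *close_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&napi_hash_lock);
	usleep(1000);
	pthread_mutex_lock(&lp_lock);      /* the interrupt handler's lock, taken while the hash lock is held */
	puts("close path got both locks");
	pthread_mutex_unlock(&lp_lock);
	pthread_mutex_unlock(&napi_hash_lock);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, open_path, NULL);
	pthread_create(&t1, NULL, close_path, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}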
Netdev maintainers, there is an alternate locking solution I can
propose as an RFC that might avoid this class of problem if this sort
of issue is more widespread than just pcnet32:
  - add the NAPI to the hash in netif_napi_add_weight (instead of
    napi_enable)
  - remove the NAPI from the hash in __netif_napi_del (instead of
    napi_disable)

If changing the locking order in core is the desired route, then the
patch below should be unnecessary, but:

diff --git a/drivers/net/ethernet/amd/pcnet32.c b/drivers/net/ethernet/amd/pcnet32.c
index 72db9f9e7bee..2e0077e68883 100644
--- a/drivers/net/ethernet/amd/pcnet32.c
+++ b/drivers/net/ethernet/amd/pcnet32.c
@@ -2625,11 +2625,10 @@ static int pcnet32_close(struct net_device *dev)
 
 	del_timer_sync(&lp->watchdog_timer);
 
+	spin_lock_irqsave(&lp->lock, flags);
 	netif_stop_queue(dev);
 	napi_disable(&lp->napi);
 
-	spin_lock_irqsave(&lp->lock, flags);
-
 	dev->stats.rx_missed_errors = lp->a->read_csr(ioaddr, 112);
 
 	netif_printk(lp, ifdown, KERN_DEBUG, dev,
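For completeness, my understanding of why the above removes the splat:
with lp->lock taken before napi_disable, the close path now acquires
lp->lock before napi_hash_lock, the same order the open path uses, so
there is a single consistent ordering for lockdep to see. In the toy
program earlier in this mail, that corresponds to the following
drop-in replacement for close_path() (again, made-up names, purely
illustrative):

/*
 * Drop-in replacement for close_path() in the toy program above:
 * take lp_lock first, then napi_hash_lock, matching open_path().
 * With one global lock order there is no AB-BA cycle left, so the
 * program runs to completion instead of hanging.
 */
static void *close_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lp_lock);
	pthread_mutex_lock(&napi_hash_lock);
	puts("close path got both locks");
	pthread_mutex_unlock(&napi_hash_lock);
	pthread_mutex_unlock(&lp_lock);
	return NULL;
}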