On 11/27/24 12:00, Joe Damato wrote:
On Wed, Nov 27, 2024 at 10:51:16AM -0800, Joe Damato wrote:
On Wed, Nov 27, 2024 at 09:43:54AM -0800, Guenter Roeck wrote:
Hi,
On Fri, Oct 11, 2024 at 06:45:00PM +0000, Joe Damato wrote:
Add a persistent per-NAPI config area to the core. Drivers opt in to
the persistent config for a NAPI by passing an index when calling
netif_napi_add_config.
napi_config is allocated in alloc_netdev_mqs, freed in free_netdev
(after the NAPIs are deleted).
Drivers which call netif_napi_add_config will have persistent per-NAPI
settings: NAPI ID, gro_flush_timeout, and defer_hard_irqs.
Per-NAPI settings are saved in napi_disable and restored in napi_enable.
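For illustration, a driver opts in roughly as in the sketch below; the
"foo" driver, its structures, and the queue loop are hypothetical, and
only netif_napi_add_config() itself comes from this patch:

/* Hypothetical "foo" driver opting its NAPIs into the persistent
 * config area. Passing a stable index (here the queue index) keys
 * the per-NAPI slot, so a queue keeps its NAPI ID, gro_flush_timeout,
 * and defer_hard_irqs across disable/enable cycles. */
static void foo_add_napis(struct foo_priv *priv)
{
	int i;

	for (i = 0; i < priv->num_queues; i++)
		netif_napi_add_config(priv->netdev, &priv->queues[i].napi,
				      foo_poll, i);
}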
Co-developed-by: Martin Karsten <mkarsten@xxxxxxxxxxxx>
Signed-off-by: Martin Karsten <mkarsten@xxxxxxxxxxxx>
Signed-off-by: Joe Damato <jdamato@xxxxxxxxxx>
Reviewed-by: Jakub Kicinski <kuba@xxxxxxxxxx>
This patch triggers a lock inversion warning on pcnet32 Ethernet adapters.
Thanks for the report. I am not familiar with the pcnet driver, but
took some time now to read the report below and the driver code.
I could definitely be reading the output incorrectly (if so please
let me know), but it seems like the issue can be triggered in this
case:
Sorry, my apologies, I both:
- read the report incorrectly, and
- proposed a bad patch that would result in a deadlock :)
After re-reading it and running this by Martin (who is CC'd), the
inversion is actually:
CPU 0:
  pcnet32_open
    lock(lp->lock)
    napi_enable
      napi_hash_add  <- before this executes, CPU 1 proceeds
        lock(napi_hash_lock)

CPU 1:
  pcnet32_close
    napi_disable
      napi_hash_del
        lock(napi_hash_lock)
        < INTERRUPT >
          pcnet32_interrupt
            lock(lp->lock)
This is now an inversion because:
CPU 0: holds lp->lock and is about to take napi_hash_lock
CPU 1: holds napi_hash_lock while an IRQ firing on CPU 1 tries to take
lp->lock (which CPU 0 already holds)
Neither side can proceed:
- CPU 0 is stuck waiting for napi_hash_lock
- CPU 1 is stuck waiting for lp->lock
I think the below explanation is still correct as to why the
identified commit causes the issue:
It seems this was triggered because before the identified commit,
napi_enable did not call napi_hash_add (and thus did not take the
napi_hash_lock).
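Roughly, the placement changed like this (an abridged sketch from my
reading of net/core/dev.c, not the literal diff):

/* Before the identified commit (abridged): */
void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
			   int (*poll)(struct napi_struct *, int), int weight)
{
	/* ... */
	napi_hash_add(napi);	/* napi_hash_lock taken at add time,
				 * never under a driver's private lock */
}

/* After it (abridged): */
void napi_enable(struct napi_struct *n)
{
	/* ... */
	napi_hash_add(n);	/* napi_hash_lock now taken inside
				 * napi_enable(), which pcnet32 calls
				 * while holding lp->lock */
}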
However, the previous patch I proposed for pcnet32 would also cause
a deadlock as the watchdog timer's function also needs lp->lock.
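(For reference, the watchdog path looks roughly like this, paraphrased
from drivers/net/ethernet/amd/pcnet32.c; it takes lp->lock itself, so
del_timer_sync() must not be called with lp->lock held:)

static void pcnet32_watchdog(struct timer_list *t)
{
	struct pcnet32_private *lp = from_timer(lp, t, watchdog_timer);
	struct net_device *dev = lp->dev;
	unsigned long flags;

	/* Print the link status if it has changed */
	spin_lock_irqsave(&lp->lock, flags);	/* <- needs lp->lock */
	pcnet32_check_media(dev, 0);
	spin_unlock_irqrestore(&lp->lock, flags);

	mod_timer(&lp->watchdog_timer, round_jiffies(PCNET32_WATCHDOG_TIMEOUT));
}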
A corrected patch for pcnet32 can be found below.
Guenter: Sorry, would you mind testing the below instead of the
previous patch?
Don: Let me know what you think about the below?
Netdev maintainers: if this sort of issue is more widespread than just
pcnet32, there is an alternate locking solution I can propose as an
RFC (sketched just after this list) that might avoid the whole class
of problem:
- add the NAPI to the hash in netif_napi_add_weight (instead of napi_enable)
- remove the NAPI from the hash in __netif_napi_del (instead of
napi_disable)
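A rough sketch of that direction (not a real patch; the existing
init/teardown is elided):

/* Add/del the NAPI hash entry at add/delete time instead of at
 * enable/disable time, so napi_hash_lock is never taken while a
 * driver holds its own lock around napi_enable()/napi_disable(). */
void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
			   int (*poll)(struct napi_struct *, int), int weight)
{
	/* ... existing init ... */
	napi_hash_add(napi);		/* moved back from napi_enable() */
}

void __netif_napi_del(struct napi_struct *napi)
{
	napi_hash_del(napi);		/* moved from napi_disable() */
	/* ... existing teardown ... */
}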
If changing the locking order in core is the desired route, then the
patch below should be unnecessary, but in the meantime:
diff --git a/drivers/net/ethernet/amd/pcnet32.c b/drivers/net/ethernet/amd/pcnet32.c
index 72db9f9e7bee..2e0077e68883 100644
--- a/drivers/net/ethernet/amd/pcnet32.c
+++ b/drivers/net/ethernet/amd/pcnet32.c
@@ -2625,11 +2625,10 @@ static int pcnet32_close(struct net_device *dev)
 	del_timer_sync(&lp->watchdog_timer);
+	spin_lock_irqsave(&lp->lock, flags);
 	netif_stop_queue(dev);
 	napi_disable(&lp->napi);
That is what I did, actually. The problem with that is that
napi_disable() wants to be able to sleep, so the above triggers:
BUG: sleeping function called from invalid context at net/core/dev.c:6775
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 1817, name: ip
preempt_count: 1, expected: 0
2 locks held by ip/1817:
#0: ffffffff81ded990 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x22a/0x74c
#1: ff6000000498ccb0 (&lp->lock){-.-.}-{3:3}, at: pcnet32_close+0x40/0x126
irq event stamp: 3720
hardirqs last enabled at (3719): [<ffffffff80decaf4>] _raw_spin_unlock_irqrestore+0x54/0x62
hardirqs last disabled at (3720): [<ffffffff80dec8a2>] _raw_spin_lock_irqsave+0x5e/0x64
softirqs last enabled at (3712): [<ffffffff8001efca>] handle_softirqs+0x3e6/0x4a2
softirqs last disabled at (3631): [<ffffffff80ded6cc>] __do_softirq+0x12/0x1a
CPU: 0 UID: 0 PID: 1817 Comm: ip Tainted: G N 6.12.0-10313-g7d4050728c83-dirty #1
Tainted: [N]=TEST
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[<ffffffff80006d42>] dump_backtrace+0x1c/0x24
[<ffffffff80dc8d94>] show_stack+0x2c/0x38
[<ffffffff80de00b0>] dump_stack_lvl+0x74/0xac
[<ffffffff80de00fc>] dump_stack+0x14/0x1c
[<ffffffff8004da18>] __might_resched+0x23e/0x248
[<ffffffff8004da60>] __might_sleep+0x3e/0x62
[<ffffffff80b8d370>] napi_disable+0x24/0x10c
[<ffffffff809a06fe>] pcnet32_close+0x6c/0x126
Guenter
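[Note for readers following the splat: napi_disable() begins with
might_sleep() and can sleep while waiting for an in-flight poll to
finish, which is why it cannot run under spin_lock_irqsave(). Abridged
from net/core/dev.c of that era, so treat the details as approximate:]

void napi_disable(struct napi_struct *n)
{
	unsigned long val, new;

	might_sleep();			/* the BUG at dev.c:6775 */
	set_bit(NAPI_STATE_DISABLE, &n->state);

	val = READ_ONCE(n->state);
	do {
		while (val & (NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC)) {
			usleep_range(20, 200);	/* sleeps here as well */
			val = READ_ONCE(n->state);
		}
		new = val | NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC;
		new &= ~(NAPIF_STATE_THREADED | NAPIF_STATE_PREFER_BUSY_POLL);
	} while (!try_cmpxchg(&n->state, &val, new));

	hrtimer_cancel(&n->timer);
	napi_hash_del(n);		/* then takes napi_hash_lock */
}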