Re: spinlock recursion when running q800 emulation in qemu

Hi Finn,

On 10.03.2024 at 11:18, Finn Thain wrote:
> On Sun, 10 Mar 2024, Michael Schmitz wrote:
>
>> But I've now got this in ARAnyM:
>>
>> BUG: spinlock recursion on CPU#0, pool_workqueue_/3
>> ...
>
> OK. I am unable to reproduce the BUG, unfortunately.

Looping over 178 boots (using init=/sbin/reboot), I see eight of these spinlock recursion messages in ARAnyM on my old PowerBook G4:

BUG: spinlock recursion on CPU#0, swapper/1
BUG: spinlock recursion on CPU#0, swapper/1
BUG: spinlock recursion on CPU#0, pool_workqueue_/3
BUG: spinlock recursion on CPU#0, swapper/2
BUG: spinlock recursion on CPU#0, pool_workqueue_/3
BUG: spinlock recursion on CPU#0, pool_workqueue_/3
BUG: spinlock recursion on CPU#0, swapper/2
BUG: spinlock recursion on CPU#0, pool_workqueue_/3

Trying the same on a much faster Intel system, I see no such messages. Next I'll try locking the PowerBook to half its CPU clock rate.

>> mfp_timer_c_handler() has a local_irq_save() / local_irq_restore() pair
>> around the legacy_timer_tick() invocation, so this spinlock recursion
>> does appear to happen even without reentering the scheduling timer
>> routine.
>
> IIUC it is not spinlock usage that's at issue. IIUC the problem is either
> the implementation of the locking primitives or the tests to verify their
> properties.
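
For context, the handler in question looks roughly like this (a paraphrase from memory of arch/m68k/atari/time.c, not the exact code):

static irqreturn_t mfp_timer_c_handler(int irq, void *dev_id)
{
        unsigned long flags;

        /* mask interrupts so legacy_timer_tick() cannot be re-entered
         * by a nested timer interrupt */
        local_irq_save(flags);
        legacy_timer_tick(1);
        local_irq_restore(flags);

        return IRQ_HANDLED;
}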

The tests on unlocking certainly aren't atomic, but those are not the ones firing in the messages above. The tests on locking use READ_ONCE() and so ought to be safe.
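
For reference, the lock-side and unlock-side checks look roughly like this (paraphrased from memory of kernel/locking/spinlock_debug.c; details vary between kernel versions):

static inline void debug_spin_lock_before(raw_spinlock_t *lock)
{
        SPIN_BUG_ON(READ_ONCE(lock->magic) != SPINLOCK_MAGIC, lock, "bad magic");
        SPIN_BUG_ON(READ_ONCE(lock->owner) == current, lock, "recursion");
        SPIN_BUG_ON(READ_ONCE(lock->owner_cpu) == raw_smp_processor_id(),
                                                lock, "cpu recursion");
}

static inline void debug_spin_unlock(raw_spinlock_t *lock)
{
        /* plain reads, no READ_ONCE(), on the unlock side */
        SPIN_BUG_ON(lock->magic != SPINLOCK_MAGIC, lock, "bad magic");
        SPIN_BUG_ON(!raw_spin_is_locked(lock), lock, "already unlocked");
        SPIN_BUG_ON(lock->owner != current, lock, "wrong owner");
        SPIN_BUG_ON(lock->owner_cpu != raw_smp_processor_id(),
                                                lock, "wrong CPU");
        lock->owner = SPINLOCK_OWNER_INIT;
        lock->owner_cpu = -1;
}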

The locking primitives themselves are not atomic at all, by design ('No atomicity anywhere, we are on UP'). When not debugging, spinlocks are NOPs on UP.
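
The UP versions under CONFIG_DEBUG_SPINLOCK are roughly the following (again paraphrased from memory, this time of include/linux/spinlock_up.h):

/* in the debug case, 1 means unlocked and 0 means locked;
 * plain stores plus compiler barriers, nothing atomic */
static inline void arch_spin_lock(arch_spinlock_t *lock)
{
        lock->slock = 0;
        barrier();
}

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
        barrier();
        lock->slock = 1;
}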

I wonder whether CONFIG_DEBUG_SPINLOCK was ever meant to work at all on UP?

Cheers,

	Michael

