On 6/14/21 1:33 PM, Vlastimil Babka wrote:
> On 6/14/21 1:16 PM, Sebastian Andrzej Siewior wrote:
>
> But now that I dig into this in detail, I can see there might be another
> instance of this imbalance bug, if CONFIG_PREEMPTION is disabled, but
> CONFIG_PREEMPT_COUNT is enabled, which seems to be possible in some debug
> scenarios. Because then preempt_disable()/preempt_enable() still manipulate
> the preempt counter and compiling them out in __slab_alloc() will cause
> imbalance.
>
> So I think the guards in __slab_alloc() should be using CONFIG_PREEMPT_COUNT
> instead of CONFIG_PREEMPT to be correct on all configs. I dare not remove
> them completely :)

Yep, it's possible to get such a scenario with PREEMPT_VOLUNTARY plus
PROVE_LOCKING: CONFIG_PREEMPTION is disabled but CONFIG_PREEMPT_COUNT is
enabled, and RCU then complains in the page allocator due to the unpaired
preempt_disable() before entering it.

I've pushed a new branch revision with this fixed:

https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slub-local-lock-v2r3