I'd like to revive the discussion and quickly summarize our options for avoiding a false-positive lockdep-RCU splat when check_and_switch_context() acquires the ASID spinlock:

1. Briefly report the CPU as online again via rcutree_report_cpu_{starting|dead}()
2. Replace raw_spin_lock*() with arch_spin_lock()
3. Remove the ASID spinlock

Options 1 and 2 are workarounds, each with different pros and cons (and proponents). Regarding option 3, the last comment from Russell is:

> For 32-bit non-LPAE, I can't recall any issues, nor can I think of any
> because module space is just another few entries in the L1 page tables
> below the direct mapping (which isn't a problem because we don't use
> anything in hardware to separate the kernel space from user space in
> the page tables.) TTBCR is set to 0.
>
> For LPAE, there may be issues there because TTBR0 and TTBR1 are both
> used, and TTBCR.T1SZ is set non-zero to:
>
> arch/arm/include/asm/pgtable-3level-hwdef.h:#define TTBR1_SIZE (((PAGE_OFFSET >> 30) - 1) << 16)
>
> so I suspect that's where the problems may lie - but then module
> mappings should also exist in init_mm (swapper_pg_dir) and should
> be global.

Unfortunately, I don't feel qualified to contribute to the discussion on option 3. Russell and Will, would you be able to spare some time to drive this further? Otherwise, I would propose that we make a decision between options 1 and 2.

Kind regards,
Stefan
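
P.S. For reference, here is a rough, untested sketch of what option 2 might look like in the ASID allocator. This is only an illustration of the shape of the change, not a proposed patch: it assumes we open-code the IRQ handling around arch_spin_lock(), since arch_spin_lock() neither disables interrupts nor participates in lockdep, which is precisely why the false-positive splat on the dying CPU would disappear (and why we would lose lockdep coverage for this lock everywhere, not just on the hotplug path):

```c
/*
 * Hypothetical sketch only, not a tested patch. Assumes the existing
 * cpu_asid_lock in arch/arm/mm/context.c. arch_spin_lock() operates on
 * the underlying arch_spinlock_t directly, bypassing the lockdep/RCU
 * instrumentation that raw_spin_lock_irqsave() performs.
 */
static DEFINE_RAW_SPINLOCK(cpu_asid_lock);

void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
{
	unsigned long flags;

	/* arch_spin_lock() does not disable IRQs, so do it by hand */
	local_irq_save(flags);
	arch_spin_lock(&cpu_asid_lock.raw_lock);	/* was raw_spin_lock_irqsave() */

	/* ... ASID check/rollover as before ... */

	arch_spin_unlock(&cpu_asid_lock.raw_lock);
	local_irq_restore(flags);

	/* ... switch the MMU context as before ... */
}
```

The obvious trade-off is that this silences lockdep for cpu_asid_lock unconditionally, whereas option 1 keeps the instrumentation and only adjusts RCU's view of the CPU during the offline window.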