On Tue, 2021-03-30 at 12:59 -0400, Paolo Bonzini wrote:
> pvclock_gtod_sync_lock can be taken with interrupts disabled if the
> preempt notifier calls get_kvmclock_ns to update the Xen
> runstate information:
>
>   spin_lock include/linux/spinlock.h:354 [inline]
>   get_kvmclock_ns+0x25/0x390 arch/x86/kvm/x86.c:2587
>   kvm_xen_update_runstate+0x3d/0x2c0 arch/x86/kvm/xen.c:69
>   kvm_xen_update_runstate_guest+0x74/0x320 arch/x86/kvm/xen.c:100
>   kvm_xen_runstate_set_preempted arch/x86/kvm/xen.h:96 [inline]
>   kvm_arch_vcpu_put+0x2d8/0x5a0 arch/x86/kvm/x86.c:4062
>
> So change the users of the spinlock to spin_lock_irqsave and
> spin_unlock_irqrestore.

Apologies, I didn't spot this at the time. Looks sane enough (if we
ignore the elephant in the room that kvm_xen_update_runstate_guest()
is also writing to userspace with interrupts disabled on this
preempted code path, but I have a fix for that in the works¹).

However, in 5.15-rc5 I'm still seeing the warning below when I run
xen_shinfo_test. I confess I'm not entirely sure what it's telling me.

[   89.138354] =============================
[   89.138356] [ BUG: Invalid wait context ]
[   89.138358] 5.15.0-rc5+ #834 Tainted: G S I E
[   89.138360] -----------------------------
[   89.138361] xen_shinfo_test/2575 is trying to lock:
[   89.138363] ffffa34a0364efd8 (&kvm->arch.pvclock_gtod_sync_lock){....}-{3:3}, at: get_kvmclock_ns+0x1f/0x130 [kvm]
[   89.138442] other info that might help us debug this:
[   89.138444] context-{5:5}
[   89.138445] 4 locks held by xen_shinfo_test/2575:
[   89.138447]  #0: ffff972bdc3b8108 (&vcpu->mutex){+.+.}-{4:4}, at: kvm_vcpu_ioctl+0x77/0x6f0 [kvm]
[   89.138483]  #1: ffffa34a03662e90 (&kvm->srcu){....}-{0:0}, at: kvm_arch_vcpu_ioctl_run+0xdc/0x8b0 [kvm]
[   89.138526]  #2: ffff97331fdbac98 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0xff/0xbd0
[   89.138534]  #3: ffffa34a03662e90 (&kvm->srcu){....}-{0:0}, at: kvm_arch_vcpu_put+0x26/0x170 [kvm]
[   89.138576] stack backtrace:
[   89.138577] CPU: 27 PID: 2575 Comm: xen_shinfo_test Tainted: G S I E 5.15.0-rc5+ #834
[   89.138580] Hardware name: Intel Corporation S2600CW/S2600CW, BIOS SE5C610.86B.01.01.0008.021120151325 02/11/2015
[   89.138582] Call Trace:
[   89.138585]  dump_stack_lvl+0x6a/0x9a
[   89.138592]  __lock_acquire.cold+0x2ac/0x2d5
[   89.138597]  ? __lock_acquire+0x578/0x1f80
[   89.138604]  lock_acquire+0xc0/0x2d0
[   89.138608]  ? get_kvmclock_ns+0x1f/0x130 [kvm]
[   89.138648]  ? find_held_lock+0x2b/0x80
[   89.138653]  _raw_spin_lock_irqsave+0x48/0x60
[   89.138656]  ? get_kvmclock_ns+0x1f/0x130 [kvm]
[   89.138695]  get_kvmclock_ns+0x1f/0x130 [kvm]
[   89.138734]  kvm_xen_update_runstate+0x14/0x90 [kvm]
[   89.138783]  kvm_xen_update_runstate_guest+0x15/0xd0 [kvm]
[   89.138830]  kvm_arch_vcpu_put+0xe6/0x170 [kvm]
[   89.138870]  kvm_sched_out+0x2f/0x40 [kvm]
[   89.138900]  __schedule+0x5de/0xbd0
[   89.138904]  ? kvm_mmu_topup_memory_cache+0x21/0x70 [kvm]
[   89.138937]  __cond_resched+0x34/0x50
[   89.138941]  kmem_cache_alloc+0x228/0x2e0
[   89.138946]  kvm_mmu_topup_memory_cache+0x21/0x70 [kvm]
[   89.138979]  mmu_topup_memory_caches+0x1d/0x70 [kvm]
[   89.139024]  kvm_mmu_load+0x2d/0x750 [kvm]
[   89.139070]  ? kvm_cpu_has_extint+0x15/0x90 [kvm]
[   89.139113]  ? kvm_cpu_has_injectable_intr+0xe/0x50 [kvm]
[   89.139155]  vcpu_enter_guest+0xc77/0x1210 [kvm]
[   89.139195]  ? kvm_arch_vcpu_ioctl_run+0x146/0x8b0 [kvm]
[   89.139235]  kvm_arch_vcpu_ioctl_run+0x146/0x8b0 [kvm]
[   89.139274]  kvm_vcpu_ioctl+0x279/0x6f0 [kvm]
[   89.139306]  ? find_held_lock+0x2b/0x80
[   89.139312]  __x64_sys_ioctl+0x83/0xb0
[   89.139316]  do_syscall_64+0x3b/0x90
[   89.139320]  entry_SYSCALL_64_after_hwframe+0x44/0xae

¹ https://git.infradead.org/users/dwmw2/linux.git/commitdiff/ec22c08258
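[Editor's note] For readers following along, the change Paolo describes in the
quoted commit message is the usual spin_lock() -> spin_lock_irqsave()
conversion, so that pvclock_gtod_sync_lock can also be taken from the preempt
notifier path where interrupts are already disabled. A minimal sketch of that
pattern is below; the struct, field names and the "clock math" placeholder are
illustrative only, not the actual KVM code or the upstream patch hunks.

	#include <linux/spinlock.h>
	#include <linux/types.h>

	/* Illustrative stand-in for the relevant part of struct kvm_arch. */
	struct demo_arch {
		spinlock_t pvclock_gtod_sync_lock;
		u64 kvmclock_offset;
	};

	/*
	 * Before the fix: plain spin_lock()/spin_unlock().  That is not safe
	 * once the same lock is reachable from kvm_arch_vcpu_put() via the
	 * preempt notifier, where interrupts are already off.  The
	 * irqsave/irqrestore variants work in both contexts.
	 */
	static u64 demo_get_kvmclock_ns(struct demo_arch *ka)
	{
		unsigned long flags;
		u64 ns;

		spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
		ns = ka->kvmclock_offset;	/* placeholder for the real clock math */
		spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);

		return ns;
	}

The same substitution is applied to every user of the lock, as the quoted
commit message says; the warning above is about the lock's context class
rather than the irqsave conversion itself.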