lockdep splat with gpc cache

Today I got the following lockdep splat:

----------
xen_shinfo_test/2726393 is trying to acquire lock:
ffff8885dc2818f0 (&gpc->lock){....}-{2:2}, at: kvm_xen_update_runstate_guest+0xcd/0x500 [kvm]

but task is already holding lock:
ffff888bc8803058 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x30/0x120

which lock already depends on the new lock.

3 locks held by xen_shinfo_test/2726393:
 #0: ffff8885dc2800b8 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x19c/0xc60 [kvm]
 #1: ffff888bc8803058 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x30/0x120
 #2: ffffc900221f44c0 (&kvm->srcu){....}-{0:0}, at: kvm_arch_vcpu_put+0x9f/0x7e0 [kvm]

 __lock_acquire+0xb72/0x1870
 lock_acquire+0x1d8/0x5b0
 _raw_read_lock_irqsave+0x4f/0xb0
 kvm_xen_update_runstate_guest+0xcd/0x500 [kvm]
 kvm_arch_vcpu_put+0x48c/0x7e0 [kvm]
 kvm_sched_out+0xaf/0xf0 [kvm]
 prepare_task_switch+0x379/0xe20

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&rq->__lock);
                               lock(&p->pi_lock);
                               lock(&rq->__lock);
  lock(&gpc->lock);
------------------

As you can see from the dump above, the weird part is the pre-existing lockdep chain:

  &gpc->lock --> &p->pi_lock --> &rq->__lock

I don't understand why p->pi_lock would ever be taken inside a gpc->lock critical section. Here is the full dependency chain from the report (a small userspace sketch of the resulting inversion follows it):

  -> #4 (&rq->__lock){-.-.}-{2:2}:
       __lock_acquire+0xb72/0x1870
       lock_acquire+0x1d8/0x5b0
       _raw_spin_lock_nested+0x3a/0x70
       raw_spin_rq_lock_nested+0x30/0x120
       task_fork_fair+0x6b/0x590
       sched_cgroup_fork+0x38b/0x590
       copy_process+0x2e5e/0x5290
       kernel_clone+0xba/0x890
       kernel_thread+0xae/0xe0
       rest_init+0x22/0x1f0
       arch_call_rest_init+0xf/0x15
       start_kernel+0x3d3/0x3f1
       secondary_startup_64_no_verify+0xd5/0xdb

  -> #3 (&p->pi_lock){-.-.}-{2:2}:
       __lock_acquire+0xb72/0x1870
       lock_acquire+0x1d8/0x5b0
       _raw_spin_lock_irqsave+0x43/0x90
       try_to_wake_up+0xb3/0xdd0
       create_worker+0x374/0x510
       workqueue_init+0x29f/0x343
       kernel_init_freeable+0x40e/0x53e
       kernel_init+0x19/0x140
       ret_from_fork+0x22/0x30

  -> #2 (&pool->lock){-.-.}-{2:2}:
       __lock_acquire+0xb72/0x1870
       lock_acquire+0x1d8/0x5b0
       _raw_spin_lock+0x34/0x80
       __queue_work+0x2a9/0xbb0
       queue_work_on+0x7b/0x90
       percpu_ref_put_many.constprop.0+0x16b/0x1a0
       uncharge_folio+0xf6/0x650
       __mem_cgroup_uncharge_list+0xb9/0x150
       release_pages+0x55e/0x1030
       __pagevec_lru_add+0x2f2/0x4f0
       folio_add_lru+0x326/0x550
       wp_page_copy+0x70d/0x10c0
       __handle_mm_fault+0xd9a/0x13d0
       handle_mm_fault+0x16b/0x5e0
       do_user_addr_fault+0x344/0xd80
       exc_page_fault+0x5a/0xe0
       asm_exc_page_fault+0x1e/0x30

  -> #1 (lock#6){+.+.}-{2:2}:
       __lock_acquire+0xb72/0x1870
       lock_acquire+0x1d8/0x5b0
       folio_mark_accessed+0x18a/0x770
       kvm_release_page_clean+0x1a4/0x240 [kvm]
       hva_to_pfn_retry+0x6d7/0x8e0 [kvm]
       kvm_gfn_to_pfn_cache_refresh+0x368/0xb90 [kvm]
       kvm_set_msr_common+0x9f4/0x26a0 [kvm]
       __kvm_set_msr+0xea/0x450 [kvm]
       kvm_emulate_wrmsr+0xb5/0x1a0 [kvm]
       vmx_handle_exit+0x15/0x140 [kvm_intel]
       vcpu_enter_guest+0x214a/0x3cc0 [kvm]
       vcpu_run+0xc5/0x950 [kvm]
       kvm_arch_vcpu_ioctl_run+0x326/0x10f0 [kvm]
       kvm_vcpu_ioctl+0x46a/0xc60 [kvm]
       __x64_sys_ioctl+0x127/0x190
       do_syscall_64+0x5c/0x80
       entry_SYSCALL_64_after_hwframe+0x44/0xae

  -> #0 (&gpc->lock){....}-{2:2}:
       __lock_acquire+0xb72/0x1870
       lock_acquire+0x1d8/0x5b0
       _raw_read_lock_irqsave+0x4f/0xb0
       kvm_xen_update_runstate_guest+0xcd/0x500 [kvm]
       kvm_arch_vcpu_put+0x48c/0x7e0 [kvm]
       kvm_sched_out+0xaf/0xf0 [kvm]
       prepare_task_switch+0x379/0xe20
       __schedule+0x3f7/0x1500
       schedule+0xe0/0x1f0
       xfer_to_guest_mode_handle_work+0xa8/0xe0
       vcpu_run+0x5f9/0x950 [kvm]
       kvm_arch_vcpu_ioctl_run+0x326/0x10f0 [kvm]
       kvm_vcpu_ioctl+0x46a/0xc60 [kvm]
       __x64_sys_ioctl+0x127/0x190
       do_syscall_64+0x5c/0x80
       entry_SYSCALL_64_after_hwframe+0x44/0xae
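
What lockdep is worried about reduces to a classic ABBA inversion: the sched-out path takes gpc->lock while rq->__lock is already held, whereas the recorded chain implies the opposite order (gpc->lock first, reaching rq->__lock through lock#6, pool->lock and p->pi_lock). Below is only a minimal userspace analogue of that pattern, with pthread mutexes standing in for the two locks; the names, the sleeps and the two "paths" are purely illustrative, not kernel code:

/*
 * Userspace sketch of the inversion: two locks taken in opposite
 * orders by two threads.  rq_lock/gpc_lock are illustrative names.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t rq_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t gpc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of the sched-out path: rq->__lock held, then gpc->lock. */
static void *sched_out_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&rq_lock);
        usleep(1000);                   /* widen the race window */
        pthread_mutex_lock(&gpc_lock);  /* may block forever */
        pthread_mutex_unlock(&gpc_lock);
        pthread_mutex_unlock(&rq_lock);
        return NULL;
}

/*
 * Analogue of the recorded chain: gpc->lock held, then (transitively)
 * rq->__lock.
 */
static void *existing_chain(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&gpc_lock);
        usleep(1000);
        pthread_mutex_lock(&rq_lock);   /* ...and so may this */
        pthread_mutex_unlock(&rq_lock);
        pthread_mutex_unlock(&gpc_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, sched_out_path, NULL);
        pthread_create(&b, NULL, existing_chain, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        puts("got lucky, no deadlock this run");
        return 0;
}

Each thread can end up holding one lock while waiting for the other, which is exactly the "Possible unsafe locking scenario" box above; lockdep flags the ordering even if the timing never actually lines up.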

I'm about to disappear for a couple of weeks, so I'll just throw this out and think about it while I'm away.

Paolo



