Re: [bug report] KVM: x86/mmu: Use an rwlock for the x86 MMU

On Mon, Jul 26, 2021, Ben Gardon wrote:
> On Mon, Jul 26, 2021 at 12:52 AM Dan Carpenter <dan.carpenter@xxxxxxxxxx> wrote:
> >
> > [ This is not the correct patch to blame, but there is something going
> >   on here which I don't understand so this email is more about me
> >   learning rather than reporting bugs. - dan ]
> >
> > Hello Ben Gardon,
> >
> > The patch 531810caa9f4: "KVM: x86/mmu: Use an rwlock for the x86 MMU"
> > from Feb 2, 2021, leads to the following static checker warning:
> >
> >         arch/x86/kvm/mmu/mmu.c:5769 kvm_mmu_zap_all()
> >         warn: sleeping in atomic context
> >
> > arch/x86/kvm/mmu/mmu.c
> >     5756 void kvm_mmu_zap_all(struct kvm *kvm)
> >     5757 {
> >     5758        struct kvm_mmu_page *sp, *node;
> >     5759        LIST_HEAD(invalid_list);
> >     5760        int ign;
> >     5761
> >     5762        write_lock(&kvm->mmu_lock);
> >                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
> > This line bumps the preempt count.
> >
> >     5763 restart:
> >     5764        list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
> >     5765                if (WARN_ON(sp->role.invalid))
> >     5766                        continue;
> >     5767                if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign))
> >     5768                        goto restart;
> > --> 5769                if (cond_resched_rwlock_write(&kvm->mmu_lock))
> >                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > This line triggers a sleeping in atomic warning.  What's going on here
> > that I'm not understanding?
> 
> 
> Hi Dan,
> 
> Thanks for sending this. I'm confused by this sequence too. I'm not
> sure how this could sleep in an atomic context.
> My first thought was that there might be something going on with the
> qrwlock's wait_lock, but since this thread already acquired the
> rwlock, it can't be holding / waiting on the wait_lock.
> 
> Then I thought the __might_sleep could be in the wrong place, but it's
> in the same place for a regular spinlock, so I think that's fine.

The PREEMPT_LOCK_OFFSET parameter to __might_sleep()

  __might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\

effectively tells it to exempt a single preemption count via preempt_count_equals()

  void ___might_sleep(const char *file, int line, int preempt_offset)
  {
	...

	if ((preempt_count_equals(preempt_offset) && !irqs_disabled() &&
	     !is_idle_task(current) && !current->non_block_count) ||
	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
	    oops_in_progress)
		return;

	...
  }

which returns true if the preempt count equals the passed-in offset.
PREEMPT_LOCK_OFFSET is just the vanilla preempt_disable() offset, i.e. exactly
what acquiring a single spinlock/rwlock adds to the preempt count, which is why
there's no special preemption handling in the lock/unlock paths.

  #define PREEMPT_LOCK_OFFSET	PREEMPT_DISABLE_OFFSET
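
For reference, the check it feeds into is (from memory, so roughly) just a
comparison of the current preempt count, plus the RCU read depth on
preemptible kernels, against that offset:

  /* kernel/sched/core.c */
  static inline int preempt_count_equals(int preempt_offset)
  {
	int nested = preempt_count() + rcu_preempt_depth();

	return (nested == preempt_offset);
  }

Since write_lock() bumps the preempt count by exactly PREEMPT_DISABLE_OFFSET,
holding mmu_lock and nothing else satisfies the check, i.e. __might_sleep()
shouldn't splat at runtime for this sequence.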


Dan, is this coming from Smatch?  If so, is this by chance a new, in-progress
warning that has special code to handle cond_resched_lock()?  I couldn't find
any matches on "sleeping in atomic context" in Smatch.  The rwlock variants,
cond_resched_rwlock_{read,write}(), were added specifically for KVM's TDP MMU;
maybe they snuck in after a waiver for cond_resched_lock() was added?
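
Regardless of where the warning comes from, the "sleep" can't actually happen
while the lock is held: the helper drops mmu_lock before rescheduling and
reacquires it afterwards.  From memory, the implementation in
kernel/sched/core.c looks something like:

  int __cond_resched_rwlock_write(rwlock_t *lock)
  {
	int resched = should_resched(PREEMPT_LOCK_OFFSET);
	int ret = 0;

	lockdep_assert_held_write(lock);

	if (rwlock_needbreak(lock) || resched) {
		/* Drop the lock before scheduling... */
		write_unlock(lock);
		if (resched)
			preempt_schedule_common();
		else
			cpu_relax();
		ret = 1;
		/* ...and retake it before returning. */
		write_lock(lock);
	}
	return ret;
  }

so a true "sleeping in atomic context" here would require a preempt count
above and beyond mmu_lock itself.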


