Re: [PATCH v4 21/30] KVM: x86/mmu: Zap invalidated roots via asynchronous worker

On Sat, Mar 05, 2022, Paolo Bonzini wrote:
> On 3/5/22 01:34, Sean Christopherson wrote:
> > On Fri, Mar 04, 2022, Paolo Bonzini wrote:
> > > On 3/4/22 17:02, Sean Christopherson wrote:
> > > > On Fri, Mar 04, 2022, Paolo Bonzini wrote:
> > > > > On 3/3/22 22:32, Sean Christopherson wrote:
> > > > > I didn't remove the paragraph from the commit message, but I think it's
> > > > > unnecessary now.  The workqueue is flushed in kvm_mmu_zap_all_fast() and
> > > > > kvm_mmu_uninit_tdp_mmu(), unlike the buggy patch, so it doesn't need to take
> > > > > a reference to the VM.
> > > > > 
> > > > > I think I don't even need to check kvm->users_count in the defunct root
> > > > > case, as long as kvm_mmu_uninit_tdp_mmu() flushes and destroys the workqueue
> > > > > before it checks that the lists are empty.
> > > > 
> > > > Yes, that should work.  IIRC, the WARN_ONs will tell us/you quite quickly if
> > > > we're wrong :-)  mmu_notifier_unregister() will call the "slow" kvm_mmu_zap_all()
> > > > and thus ensure all non-root pages are zapped, but "leaking" a worker will trigger
> > > > the WARN_ON that checks there are no roots left on the list.
> > > 
> > > Good, for the record these are the commit messages I have:
> 
> I'm seeing some hangs in ~50% of installation jobs, both Windows and Linux.
> I have not yet tried to reproduce outside the automated tests, or to bisect,
> but I'll try to push at least the first part of the series for 5.18.

Out of curiosity, what was the bug?  I see this got pushed to kvm/next.