Re: [RFC PATCH 00/28] kvm: mmu: Rework the x86 TDP direct mapped case

Switching to an RW lock is easy, but nothing would be able to use the
read lock, because the existing code can't safely make most kinds of
changes to PTEs in parallel. Sharding the spinlock by GFN might make
that easier, but it would also take a lot of re-engineering.
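
As a rough illustration of what GFN-based sharding would mean, here is a
minimal userspace sketch. The shard count, struct, and helper names are
invented for this example; they are not part of this series or of KVM
(real code would use spinlock_t rather than pthreads).

/*
 * Hypothetical sketch of a GFN-sharded MMU lock, for illustration only.
 * MMU_LOCK_SHARDS and mmu_lock_for_gfn() are made-up names.
 */
#include <pthread.h>
#include <stdint.h>

#define MMU_LOCK_SHARDS 64		/* arbitrary shard count */

typedef uint64_t gfn_t;			/* stand-in for KVM's gfn_t */

struct mmu_lock_shards {
	pthread_mutex_t lock[MMU_LOCK_SHARDS];
};

static void mmu_lock_shards_init(struct mmu_lock_shards *s)
{
	for (int i = 0; i < MMU_LOCK_SHARDS; i++)
		pthread_mutex_init(&s->lock[i], NULL);
}

/* Map a GFN to one shard; updates to that GFN serialize on its shard. */
static pthread_mutex_t *mmu_lock_for_gfn(struct mmu_lock_shards *s, gfn_t gfn)
{
	return &s->lock[gfn % MMU_LOCK_SHARDS];
}

The lookup itself is trivial; the re-engineering would be in the paths
that currently rely on mmu_lock covering every GFN at once (e.g. zapping
an entire memslot or root), which would have to take shards in a
consistent order or take all of them.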

On Fri, Dec 6, 2019 at 11:57 AM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> On Fri, Dec 06, 2019 at 11:55:42AM -0800, Ben Gardon wrote:
> > I'm finally back in the office. Sorry for not getting back to you sooner.
> > I don't think it would be easy to send the synchronization changes
> > first. The reason they seem so small is that they're all handled by
> > the iterator. If we tried to put the synchronization changes in
> > without the iterator we'd have to 1.) deal with struct kvm_mmu_pages,
> > 2.) deal with the rmap, and 3.) change a huge amount of code to insert
> > the synchronization changes into the existing framework. The changes
> > wouldn't be mechanical or easy to insert either since a lot of
> > bookkeeping is currently done before PTEs are updated, with no
> > facility for rolling back the bookkeeping on PTE cmpxchg failure. We
> > could start with the iterator changes and then do the synchronization
> > changes, but the other way around would be very difficult.
>
> By synchronization changes, I meant switching to a r/w lock instead of a
> straight spinlock.  Is that doable in a smallish series?


