Re: [PATCH rebase/RFC 0/4] x86/kvm/nVMX: optimize MMU switch between L1 and L2

On 31/07/2018 17:58, Vitaly Kuznetsov wrote:
> Thank you for the rebase,
> 
> it seems that with multi-root caching this series should just ignore CR3
> changes for both root_mmu and guest_mmu: we now have two separate
> 'prev_roots' caches and these work well. However, we can still optimize
> away MMU re-initialization on L1->L2 and L2->L1 switches using, e.g., my
> 'scache' idea (which can be orthogonal to the page_role check on CR3).

Indeed, though if possible the scache should be based on the role to
avoid duplicating code and data structures.

(Also, I didn't quite have time to figure out _why_ there is still
contention without root_mmu/guest_mmu; it's probably something
trivial.)

Paolo

> In my Hyper-V-on-KVM environment I'm seeing an additional win of 1000 CPU
> cycles per nested vmexit.
> 
> I'll pull things together and re-send the whole series.



