Re: [PATCH 00/10] KVM/X86: Handle guest memory that does not have a struct page

On 21/02/2018 18:47, KarimAllah Ahmed wrote:
> For the most part, KVM can handle guest memory that does not have a struct
> page (i.e. not directly managed by the kernel). However, there are a few places
> in the code, especially in the nested code, that do not support that.
> 
> Patches 1, 2, and 3 avoid the mapping and unmapping altogether and just
> use kvm_guest_read and kvm_guest_write directly.
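
For reference, a minimal sketch of the direct-read idea, here using the existing
kvm_vcpu_read_guest() helper from include/linux/kvm_host.h (the helper names
quoted above may differ from what the series actually uses):

#include <linux/kvm_host.h>

/*
 * Sketch only: read a 4-byte value straight from guest memory instead of
 * mapping the page, copying, and unmapping it.  The copy goes through the
 * guest's hva, so it works whether or not the memory has a struct page.
 */
static int read_guest_u32(struct kvm_vcpu *vcpu, gpa_t gpa, u32 *val)
{
	if (kvm_vcpu_read_guest(vcpu, gpa, val, sizeof(*val)))
		return -EFAULT;		/* guest page not readable */
	return 0;
}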
> 
> Patch 4 introduces a new guest mapping interface that encapsulates all the
> boilerplate code that is needed to map and unmap guest memory. It also
> supports guest memory without a "struct page".
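
A rough sketch of what such a mapping interface could look like (the names and
fields below are illustrative, not necessarily what the patch defines; the key
point is that the mapping can be backed either by kmap() on a struct page or by
memremap() on a raw pfn):

#include <linux/kvm_host.h>

/*
 * Illustrative only.  The map records enough state to be torn down later,
 * whether it came from kmap() (struct page present) or memremap()
 * (no struct page).
 */
struct kvm_host_map {
	struct page *page;	/* non-NULL if the pfn has a struct page */
	void *hva;		/* kernel address of the mapped guest page */
	kvm_pfn_t pfn;
	kvm_pfn_t gfn;
};

int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map);
void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);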
> 
> Patches 5, 6, 7, 8, 9, and 10 switch most of the offending code in VMX and
> hyperv to use the new guest mapping API.
> 
> This patch series is the first set of fixes. SVM and the APIC-access page
> will be handled in a separate patch series.

I like the patches and the new API.  However, I'm a bit less convinced
about the caching aspect; keeping a page pinned is not the nicest thing
with respect to, for example, memory hot-unplug.

Since you're basically reinventing kmap_high, or alternatively
(depending on your background) xc_map_foreign_pages, it's not surprising
that memremap is slow.  How slow is it really (as seen e.g. with
vmexit.flat running in L1, on EC2 compared to vanilla KVM)?

Perhaps you can keep some kind of per-CPU cache of the last N remapped
pfns?  This cache would sit between memremap and __kvm_map_gfn and it
would be completely transparent to the layer below since it takes raw
pfns.  This removes the need to store the memslots generation etc.  (If
you go this way please place it in virt/kvm/pfncache.[ch], since
kvm_main.c is already way too big).
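
Something along these lines, purely as a sketch (the names, the cache size,
and the eviction policy below are made up for illustration; assume the caller
runs with preemption disabled, e.g. while the vCPU is loaded):

#include <linux/io.h>		/* memremap(), memunmap() */
#include <linux/percpu.h>
#include <linux/kvm_types.h>	/* kvm_pfn_t */

#define PFNCACHE_SIZE 4		/* "last N" -- arbitrary for illustration */

struct pfncache_entry {
	kvm_pfn_t pfn;
	void *hva;		/* result of a previous memremap() */
};

struct pfncache {
	struct pfncache_entry entry[PFNCACHE_SIZE];
};

static DEFINE_PER_CPU(struct pfncache, guest_pfncache);

/* Return a kernel mapping for @pfn, reusing a recent memremap() if possible. */
static void *pfncache_map(kvm_pfn_t pfn)
{
	struct pfncache *cache = this_cpu_ptr(&guest_pfncache);
	int i;

	for (i = 0; i < PFNCACHE_SIZE; i++) {
		if (cache->entry[i].hva && cache->entry[i].pfn == pfn)
			return cache->entry[i].hva;	/* hit */
	}

	/* Miss: evict slot 0 (a real implementation would pick LRU). */
	if (cache->entry[0].hva)
		memunmap(cache->entry[0].hva);

	cache->entry[0].pfn = pfn;
	cache->entry[0].hva = memremap(pfn << PAGE_SHIFT, PAGE_SIZE,
				       MEMREMAP_WB);
	return cache->entry[0].hva;
}

Since the cache is keyed on raw pfns only, it needs no knowledge of memslots
or their generation, which is the whole point of putting it below __kvm_map_gfn.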

Thanks,

Paolo

> KarimAllah Ahmed (10):
>   X86/nVMX: handle_vmon: Read 4 bytes from guest memory instead of
>     map->read->unmap sequence
>   X86/nVMX: handle_vmptrld: Copy the VMCS12 directly from guest memory
>     instead of map->copy->unmap sequence.
>   X86/nVMX: Update the PML table without mapping and unmapping the page
>   KVM: Introduce a new guest mapping API
>   KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
>   KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
>   KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt
>     descriptor table
>   KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
>   KVM/X86: hyperv: Use kvm_vcpu_map in synic_clear_sint_msg_pending
>   KVM/X86: hyperv: Use kvm_vcpu_map in synic_deliver_msg
> 
>  arch/x86/kvm/hyperv.c    |  28 ++++-----
>  arch/x86/kvm/vmx.c       | 144 +++++++++++++++--------------------------------
>  arch/x86/kvm/x86.c       |  13 ++---
>  include/linux/kvm_host.h |  15 +++++
>  virt/kvm/kvm_main.c      |  50 ++++++++++++++++
>  5 files changed, 129 insertions(+), 121 deletions(-)
> 



