Re: [PATCH 4/5] KVM: Add hypercall queue for paravirt_ops implementation

Avi Kivity wrote:
Anthony Liguori wrote:
Avi Kivity wrote:
These numbers are pretty bad. I'd like to improve them, even without PV.

I agree. Do you know what's missing at this point? There isn't a whole lot of state saving going on for the lightweight exit paths for SVM.

The SVM code doesn't even have a lightweight vmexit path.

Sure it does.  Quite a lot is deferred to vcpu_{load,put}.

Ah, I forgot.  Yes, the syscall msrs are deferred.
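
(For illustration, the deferral being described looks roughly like the sketch below. The names -- host_save_msrs, vcpu->host_msrs -- are made up for the example and are not the actual svm.c fields; the point is only that the rdmsr/wrmsr pairs for the syscall MSRs happen once per host context switch instead of once per vmexit.)

    #include <linux/kernel.h>
    #include <asm/msr.h>

    /* Illustrative sketch only; field names do not match the real code. */
    static const u32 host_save_msrs[] = {
            MSR_STAR, MSR_LSTAR, MSR_CSTAR, MSR_SYSCALL_MASK,
    };

    static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
    {
            int i;

            /* Snapshot the host's syscall MSRs once per schedule-in. */
            for (i = 0; i < ARRAY_SIZE(host_save_msrs); i++)
                    rdmsrl(host_save_msrs[i], vcpu->host_msrs[i]);
    }

    static void svm_vcpu_put(struct kvm_vcpu *vcpu)
    {
            int i;

            /* Restore the host's values only when the vcpu is scheduled
             * out, not on every vmexit. */
            for (i = 0; i < ARRAY_SIZE(host_save_msrs); i++)
                    wrmsrl(host_save_msrs[i], vcpu->host_msrs[i]);
    }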


For every vmexit, it does the entire thing, including vmload/vmsave

I haven't had a lot of luck eliminating vmload/vmsave.


For x86_64, the only issue I see is with TR. Unfortunately, I don't see a way around it.
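
(Background on why TR is the sticking point: the task register can't simply be written back after a vmexit, because ltr faults on a TSS descriptor that is already marked busy. A hand-rolled reload would look roughly like the hypothetical helper below -- illustrative only, not actual kvm code.)

    #include <asm/desc.h>

    /* Hypothetical sketch: reload the host TR by hand. The TSS descriptor
     * in the GDT has to be flipped back to "available" (type 9) first,
     * since ltr refuses to load a descriptor that is marked busy. */
    static void reload_host_tr(void)
    {
            struct desc_ptr gdt;
            struct desc_struct *descs;

            native_store_gdt(&gdt);
            descs = (struct desc_struct *)gdt.address;
            descs[GDT_ENTRY_TSS].type = 9;          /* available TSS */
            asm volatile("ltr %w0" : : "q" (GDT_ENTRY_TSS * 8));
    }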


, fpu switch (if needed)

The FPU switch can really be avoided? Is it safe to assume that the KVM code isn't going to use any FPU operations?

Generally, kernel code does not use the fpu (when it does, it calls kernel_fpu_begin() and kernel_fpu_end()). The vmx code avoids the switch.

Of course, if the guest doesn't use the fpu, the switch is avoided anyway.
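
(A sketch of the lazy switch being described, with made-up field names rather than the actual vmx.c/svm.c code: the guest's FPU state is only swapped in once the guest actually touches the FPU -- detected by intercepting #NM while CR0.TS is set -- so a guest that never uses the FPU never pays for the save/restore.)

    /* Illustrative only: guest_fpu_loaded, host_fx_image and guest_fx_image
     * are hypothetical names. Called from the #NM intercept. */
    static void load_guest_fpu(struct kvm_vcpu *vcpu)
    {
            if (vcpu->guest_fpu_loaded)
                    return;         /* already switched on an earlier exit */
            vcpu->guest_fpu_loaded = 1;

            /* Save the host FPU/SSE state, bring in the guest's. */
            asm volatile("fxsave (%0)" : : "r" (vcpu->host_fx_image) : "memory");
            asm volatile("fxrstor (%0)" : : "r" (vcpu->guest_fx_image));
    }

    /* Called when the vcpu is scheduled out (or on a heavyweight exit). */
    static void put_guest_fpu(struct kvm_vcpu *vcpu)
    {
            if (!vcpu->guest_fpu_loaded)
                    return;         /* guest never used the FPU: no switch */
            vcpu->guest_fpu_loaded = 0;

            asm volatile("fxsave (%0)" : : "r" (vcpu->guest_fx_image) : "memory");
            asm volatile("fxrstor (%0)" : : "r" (vcpu->host_fx_image));
    }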


For kbuild vs. kernbench, I suspect that -j4 causes the shadow page table cache to thrash. 1024 pages may be enough for a single instance but not -j4. Hopefully replacing the eviction algorithm (currently FIFO) will help. Otherwise we'll need to resize the cache again.
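
(For anyone not following the mmu code, the FIFO behaviour in question looks roughly like the sketch below; the names are illustrative, not the actual mmu.c functions. The victim is always the page that has been resident longest, regardless of how hot it still is, so once the working set of a -j4 build exceeds 1024 shadow pages the cache starts recycling pages the guest is about to touch again.)

    #include <linux/list.h>

    /* Illustrative sketch of FIFO eviction from a fixed-size shadow page
     * cache (hypothetical names). */
    static struct kvm_mmu_page *alloc_shadow_page(struct kvm *kvm)
    {
            struct kvm_mmu_page *sp;

            if (!kvm->n_free_mmu_pages) {
                    /* The oldest page sits at the tail of the active list. */
                    sp = list_entry(kvm->active_mmu_pages.prev,
                                    struct kvm_mmu_page, link);
                    zap_shadow_page(kvm, sp);   /* returns it to the free list */
            }

            sp = list_entry(kvm->free_mmu_pages.next,
                            struct kvm_mmu_page, link);
            list_move(&sp->link, &kvm->active_mmu_pages);   /* newest at head */
            kvm->n_free_mmu_pages--;
            return sp;
    }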

I naively tried to bump it to 2048 and hit a kmalloc limitation.


struct kvm is 22K on x86_64. Adding 1024 pointers makes it 30K. What error did you get?

With an older kvm, on a different system, I was getting:

WARNING: "__you_cannot_kzalloc_that_much"
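
(As far as I understand it, that warning comes from the link-time size check in the old slab headers rather than from kvm itself; a simplified sketch of the trick, from memory, is below. KMALLOC_MAX_SIZE stands in for whatever the largest kmalloc cache was on that configuration.)

    #include <linux/slab.h>

    /* Simplified sketch: for a compile-time-constant size larger than the
     * biggest kmalloc cache, the inline helper emits a call to a function
     * that is never defined anywhere, so the build warns or fails at link
     * time instead of the allocation silently failing at runtime. */
    static inline void *kzalloc_checked(size_t size, gfp_t flags)
    {
            if (__builtin_constant_p(size) && size > KMALLOC_MAX_SIZE) {
                    extern void __you_cannot_kzalloc_that_much(void);
                    __you_cannot_kzalloc_that_much();   /* deliberately undefined */
            }
            return kzalloc(size, flags);
    }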

On the latest git though, I don't seem to get that warning on my development system even if I bump all the way up to 8192. I'll see what bumping to 2048 does to kernbench. 4MB is actually small compared to other hypervisors for a shadow page table cache (Xen defaults to 8MB), so we may see good results.

Regards,

Anthony Liguori

We should probably make the hashtable a pointer, and allocate vcpus separately as well.
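
(Roughly what that might look like, sketched with approximate names rather than as a real patch: only pointers stay embedded in struct kvm, and the big pieces get their own allocations, so bumping KVM_NUM_MMU_PAGES or KVM_MAX_VCPUS no longer grows the one big kzalloc.)

    #include <linux/slab.h>
    #include <linux/list.h>

    #define KVM_NUM_MMU_PAGES 2048          /* e.g. after bumping the cache */
    #define KVM_MAX_VCPUS     4

    struct kvm_vcpu;                        /* allocated separately, on demand */

    /* Sketch only; field names are approximate. */
    struct kvm {
            struct hlist_head *mmu_page_hash;        /* was an inline array */
            struct kvm_vcpu   *vcpus[KVM_MAX_VCPUS]; /* pointers, not structs */
    };

    static struct kvm *kvm_create_sketch(void)
    {
            struct kvm *kvm = kzalloc(sizeof(*kvm), GFP_KERNEL);

            if (!kvm)
                    return NULL;

            /* The hash table now lives in its own allocation. */
            kvm->mmu_page_hash = kcalloc(KVM_NUM_MMU_PAGES,
                                         sizeof(*kvm->mmu_page_hash),
                                         GFP_KERNEL);
            if (!kvm->mmu_page_hash) {
                    kfree(kvm);
                    return NULL;
            }
            return kvm;
    }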


