Re: [PATCH 4/5] KVM: Add hypercall queue for paravirt_ops implementation

Anthony Liguori wrote:

I did, but using kbuild (a simple 'make' with defconfig), not kernbench. I get (elapsed time) 308 sec for kvm and 243 sec for native.

kernbench is a little different: it first runs a find over the kernel source tree to pull as much of the source as possible into the page cache, and it builds with -j4 by default.


These numbers are pretty bad. I'd like to improve them, even without PV.

I agree. Do you know what's missing at this point? There isn't a whole lot of state saving going on in the lightweight exit paths for SVM.

The SVM code doesn't even have a lightweight vmexit path. For every vmexit, it does the entire thing, including vmload/vmsave, fpu switch (if needed), segment reloading, and msr reloading. It could use a lot of work.

For kbuild vs. kernbench, I suspect that -j4 causes the shadow page table cache to thrash. 1024 pages may be enough for a single instance but not for -j4. Hopefully replacing the eviction algorithm (currently FIFO) will help; otherwise we'll need to enlarge the cache again.

--
error compiling committee.c: too many arguments to function

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/virtualization
