On 03/15/2010 10:06 AM, Avi Kivity wrote:
> On 03/15/2010 03:23 PM, Anthony Liguori wrote:
>> On 03/15/2010 08:11 AM, Avi Kivity wrote:
>>> On 03/15/2010 03:03 PM, Joerg Roedel wrote:
>>>>>> I will add another project - iommu emulation. Could be very useful
>>>>>> for doing device assignment to nested guests, which could make
>>>>>> testing a lot easier.
>>>>> Our experiments show that nested device assignment is pretty much
>>>>> required for I/O performance in nested scenarios.
>>>> Really? I did a small test with virtio-blk in a nested guest (disk
>>>> read with dd, so not a real benchmark) and got reasonable read
>>>> performance of around 25MB/s from the disk in the L2 guest.
>>> Your guest wasn't doing a zillion VMREADs and VMWRITEs every exit.
>>> I plan to reduce VMREAD/VMWRITE overhead for kvm, but there's not
>>> much we can do for other guests.
>> VMREAD/VMWRITEs are generally optimized by hypervisors as they tend
>> to be costly. KVM is a bit unusual in terms of how many times the
>> instructions are executed per exit.
> Do you know offhand of any unnecessary read/writes? There's
> update_cr8_intercept(), but on normal exits, I don't see what else we
> can remove.
Yeah, there are a number of examples.
vmcs_clear_bits() and vmcs_set_bits() each read a field of the VMCS and
then immediately write it back. This is unnecessary as the same
information could be kept in a shadow variable. In vmx_fpu_activate(),
we call vmcs_clear_bits() followed immediately by vmcs_set_bits(), which
means we're reading GUEST_CR0 twice and writing it twice.
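To make the cost concrete, here's a standalone sketch of that pattern.
vmcs_readl()/vmcs_writel() are stubbed out to count accesses (in vmx.c
they wrap VMREAD/VMWRITE), and vmcs_update_bits() is a hypothetical
combined helper, not something in the tree today:

    /* clear_bits + set_bits as in vmx.c: two reads, two writes. */
    #include <stdio.h>

    static unsigned long fake_guest_cr0; /* stands in for GUEST_CR0 */
    static int vmreads, vmwrites;

    static unsigned long vmcs_readl(void)
    {
            vmreads++;                   /* one VMREAD in the real code */
            return fake_guest_cr0;
    }

    static void vmcs_writel(unsigned long val)
    {
            vmwrites++;                  /* one VMWRITE in the real code */
            fake_guest_cr0 = val;
    }

    static void vmcs_clear_bits(unsigned long mask)
    {
            vmcs_writel(vmcs_readl() & ~mask);
    }

    static void vmcs_set_bits(unsigned long mask)
    {
            vmcs_writel(vmcs_readl() | mask);
    }

    /* Hypothetical combined read-modify-write: one read, one write. */
    static void vmcs_update_bits(unsigned long clear, unsigned long set)
    {
            vmcs_writel((vmcs_readl() & ~clear) | set);
    }

    int main(void)
    {
            vmcs_clear_bits(0x8);        /* e.g. X86_CR0_TS */
            vmcs_set_bits(0x2);          /* e.g. X86_CR0_MP */
            printf("clear+set: %d VMREADs, %d VMWRITEs\n",
                   vmreads, vmwrites);

            vmreads = vmwrites = 0;
            vmcs_update_bits(0x8, 0x2);
            printf("combined:  %d VMREAD,  %d VMWRITE\n",
                   vmreads, vmwrites);
            return 0;
    }

A shadow copy of GUEST_CR0 in the vcpu would remove even the remaining
read.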
vmx_get_rflags() reads from the VMCS, and we frequently call
get_rflags() followed by a set_rflags() to update a bit. We also don't
cache the value between calls, and there are a few spots in the code
that make multiple calls.
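Something along these lines would help (a standalone sketch; the struct
and function names are made up for illustration, and the real thing
would have to invalidate the cache on every VM entry/exit):

    #include <stdbool.h>
    #include <stdio.h>

    static unsigned long fake_guest_rflags = 0x2; /* GUEST_RFLAGS */
    static int vmreads;

    static unsigned long vmcs_readl(void)
    {
            vmreads++;                  /* one VMREAD in the real code */
            return fake_guest_rflags;
    }

    static void vmcs_writel(unsigned long val)
    {
            fake_guest_rflags = val;
    }

    /* Hypothetical per-vcpu cache; invalidated on every exit. */
    struct rflags_cache {
            unsigned long rflags;
            bool valid;
    };

    static unsigned long vmx_get_rflags(struct rflags_cache *c)
    {
            if (!c->valid) {
                    c->rflags = vmcs_readl();
                    c->valid = true;
            }
            return c->rflags;           /* repeat calls are free */
    }

    static void vmx_set_rflags(struct rflags_cache *c,
                               unsigned long rflags)
    {
            c->rflags = rflags;
            c->valid = true;
            vmcs_writel(rflags);
    }

    int main(void)
    {
            struct rflags_cache c = { 0, false };

            /* the common get_rflags()/set_rflags() one-bit update */
            vmx_set_rflags(&c, vmx_get_rflags(&c) | 0x100); /* TF */
            vmx_get_rflags(&c);         /* a second caller, same exit */
            printf("%d VMREAD(s) for three rflags calls\n", vmreads);
            return 0;
    }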
Regards,
Anthony Liguori