On 03/15/2010 08:24 AM, Joerg Roedel wrote:
> On Mon, Mar 15, 2010 at 03:11:42PM +0200, Avi Kivity wrote:
>> On 03/15/2010 03:03 PM, Joerg Roedel wrote:
>>>> I will add another project - iommu emulation. Could be very useful
>>>> for doing device assignment to nested guests, which could make
>>>> testing a lot easier.
>>>>
>>>> Our experiments show that nested device assignment is pretty much
>>>> required for I/O performance in nested scenarios.
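
(For concreteness, and using QEMU syntax that postdates this thread: an
emulated iommu plus device assignment down to a nested guest would look
roughly like the command line below.  The option names are from later
QEMU releases and the device address is made up, so treat this as a
sketch, not a recipe.)

    # give L1 an emulated vIOMMU, then hand it a host device it can in
    # turn assign to L2 (01:00.0 is a hypothetical NIC)
    qemu-system-x86_64 -machine q35,kernel-irqchip=split,accel=kvm \
        -device intel-iommu,intremap=on,caching-mode=on \
        -device vfio-pci,host=01:00.0 \
        ...
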
>>> Really? I did a small test with virtio-blk in a nested guest (disk read
>>> with dd, so not a real benchmark) and got a reasonable read performance
>>> of around 25 MB/s from the disk in the L2 guest.
>> Your guest wasn't doing a zillion VMREADs and VMWRITEs every exit.
>>
>> I plan to reduce VMREAD/VMWRITE overhead for kvm, but not much we can do
>> for other guests.
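
(The kind of caching meant here might look roughly like the sketch
below.  The field list, names and helpers are illustrative, not actual
kvm code; the point is that only the first access of a field after an
exit pays for a real -- and, when nested, trapping -- VMREAD.)

    #include <stdint.h>

    enum cached_field { GUEST_RIP, GUEST_RSP, EXIT_REASON, NR_CACHED_FIELDS };

    struct vmcs_cache {
        uint64_t value[NR_CACHED_FIELDS];
        uint32_t valid;                    /* bitmask of fields already read */
    };

    uint64_t vmcs_read(enum cached_field f);   /* the real, trapping VMREAD */

    static uint64_t cached_vmcs_read(struct vmcs_cache *c, enum cached_field f)
    {
        if (!(c->valid & (1u << f))) {         /* miss: one real VMREAD */
            c->value[f] = vmcs_read(f);
            c->valid |= 1u << f;
        }
        return c->value[f];                    /* hit: plain load, no trap */
    }

    static void vmcs_cache_flush(struct vmcs_cache *c)
    {
        c->valid = 0;              /* invalidate around every VM entry/exit */
    }
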
> Does it matter for the ept-on-ept case? The initial patchset of
> nested-vmx implemented it, and they reported a performance drop of
> around 12% between levels, which is reasonable. So I expected the loss
> of I/O performance for L2 to be reasonable in this case as well. My
> small measurement was also done using npt-on-npt.

But that was something like kernbench, IIRC, which is actually exit-light
once EPT is enabled.

Network I/O is typically exit-heavy and becomes something more of a
pathological workload (both for nested EPT and nested NPT).
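
(Toy arithmetic, with assumed numbers, for why that distinction matters:
suppose L1's exit handler touches the VMCS ~25 times per L2 exit, and
each touch itself traps to L0.)

    #include <stdio.h>

    int main(void)
    {
        int vmcs_accesses_per_exit = 25;  /* assumed, for illustration */
        int l2_exits_per_sec = 20000;     /* assumed: exit-heavy network load */

        /* each L2 exit costs one L0 exit plus one per trapped VMCS access */
        printf("L0 exits/sec when nested: %d\n",
               l2_exits_per_sec * (1 + vmcs_accesses_per_exit));
        return 0;
    }

So an exit-heavy load can see its L0 exit rate multiplied by an order of
magnitude or more, while an exit-light load like kernbench barely notices.
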
Regards,
Anthony Liguori

> 	Joerg
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html