Re: [PATCH 0/24] Nested VMX, v5

On Sun, Jul 11, 2010, Alexander Graf wrote about "Re: [PATCH 0/24] Nested VMX, v5":
> Thinking about this - it would be perfectly legal to split the VMCS into two separate structs, right? You could have one struct that you map directly into the guest, so modifications to that struct don't trap. Of course the l1 guest shouldn't be able to modify all fields of the VMCS, so you'd still keep a second struct around with shadow fields. While at it, also add a bitmap to store the dirtyness status of your fields in, if you need that.
> 
> That way a nesting aware guest could use a PV memory write instead of the (slow) instruction emulation. That should dramatically speed up nesting vmx.

Hi,

We already tried this idea, and described the results in our tech report
(see http://www.mulix.org/pubs/turtles/h-0282.pdf). 

We didn't do things quite as cleanly as you suggested - we didn't split the
structure and make only part of it available directly to the guest. Rather,
we only did what was necessary to get the performance improvement: we
modified L1 to access the VMCS directly, assuming the layout of the nested
code's vmcs12 structure, instead of calling vmread/vmwrite.

As you can see in the various benchmarks in section 4 (Evaluation) of the
report, the so-called PV vmread/vmwrite method had a noticeable effect,
though perhaps not as dramatic as you hoped. For example, for the kernbench
benchmark, nested kvm overhead (over single-level kvm virtualization) came
down from 14.5% to 10.3%, and for the specjbb benchmark, the overhead came
down from 7.8% to 6.3%. In a microbenchmark less representative of
real-life workloads, we were able to halve the overhead by adding PV
vmread/vmwrite.

In any case, the obvious problem with this whole idea on VMX is that it
requires a modified guest hypervisor, which reduces its usefulness.
This is why we didn't think we should "advertise" the ability to bypass
vmread/vmwrite in L1 and write directly to the vmcs12. But Avi Kivity
already asked me to add a document about the internal structure of vmcs12,
and once I've done that, I guess it will be "fair" for nesting-aware L1
guest hypervisors to actually use that internal structure to modify vmcs12
directly, without vmread/vmwrite and the associated exits.

By the way, I see on the KVM Forum 2010 schedule that Eddie Dong will be
talking about "Examining KVM as Nested Virtualization Friendly Guest".
I'm looking forward to reading the proceedings (unfortunately, I won't be
able to travel to the actual meeting).

Nadav.


-- 
Nadav Har'El                        |      Sunday, Jul 11 2010, 29 Tammuz 5770
nyh@xxxxxxxxxxxxxxxxxxx             |-----------------------------------------
Phone +972-523-790466, ICQ 13349191 |I used to work in a pickle factory, until
http://nadav.harel.org.il           |I got canned.

