Hi,

On Mon, Jun 14, 2010, Avi Kivity wrote about "Re: [PATCH 5/24] Introduce vmcs12: a VMCS structure for L1":
> On 06/13/2010 03:25 PM, Nadav Har'El wrote:
> >+#define VMCS12_REVISION 0x11e57ed0
>
> Where did this number come from? It's not from real hardware, yes?

Since obviously this wasn't self-explanatory, I added the explanation in a
comment.

> >+struct __attribute__ ((__packed__)) vmcs12 {
>
> __packed is a convenient define for this.

Thanks, I wasn't aware of this macro. I'm using it now.

> >+	/* According to the Intel spec, a VMCS region must start with the
> >+	 * following two fields. Then follow implementation-specific data.
> >+	 */
> >+	u32 revision_id;
> >+	u32 abort;
> >+};
>
> Note that this structure becomes an ABI, it cannot change except in a
> backward compatible way due to the need for live migration. So I'd like
> a documentation patch that adds a description of the content to
> Documentation/kvm/. It can be as simple as listing the structure
> definition.

I still have reservations about documenting the vmcs12 structure, because it
is explicitly not meant to be accessible by guests, who should use the
VMREAD/VMWRITE ABI which *is* documented (in the VMX spec). But since you
prefer, I can easily copy the structure definition into Documentation/kvm.
I'll add it as a separate patch.

> >+static struct page *nested_get_page(struct kvm_vcpu *vcpu, u64 vmcs_addr)
> >+{
> >+	struct page *vmcs_page =
> >+		gfn_to_page(vcpu->kvm, vmcs_addr >> PAGE_SHIFT);
> >+
> >+	if (is_error_page(vmcs_page)) {
> >+		printk(KERN_ERR "%s error allocating page 0x%llx\n",
> >+			__func__, vmcs_addr);
>
> Those printks can be used by a malicious guest to spam the host logs.
> Please wrap them with something that is conditional on a debug flag.
>
> I'm not sure what we need to do with a vmcs that is not in RAM. It may
> simplify things to return the error_page to the caller and set
> KVM_REQ_TRIPLE_FAULT, so we don't have to deal with error handling later on.

This is a very good point. The approach in the patches I sent was to pin the
L1-specified VMCS page and map it every time it was needed, and to unpin and
unmap it immediately afterward. This indeed will not work correctly if it is
called with interrupts disabled and, for some reason, the vmcs12 page is
swapped out.

I decided to reconsider this whole approach. In the new approach, we pin and
kmap the guest vmcs12 page at VMPTRLD time (when sleeping is fine), and unpin
and unmap it when a different VMPTRLD is done (or VMCLEAR, or cleanup).
Whenever we want to use this vmcs12 in the code, we can do so without needing
to swap in or map anything. The code should now handle the rare situation
where the vmcs12 page gets swapped out, and is cleaner (almost 100 lines of
ugly nested_map_current()/nested_unmap_current() calls were eliminated).

The only downside I see is that now, when nested VMX is being used, a single
page, the current vmcs of L1, is always pinned and kmapped. I believe that
pinning and mapping one single page (no matter how many guests there are) is
a small price to pay - we already spend more than that in other places (e.g.,
one vmcs02 page per L2 guest). Does this sound reasonable to you?

> >+	kvm_release_page_dirty(page);
>
> Do we always dirty the page?
>
> I guess it is no big deal even if we don't.

In the previous patch, you're right - it's pretty easy to know in each case
whether we modified the page or not. In the new patches, where the page is
pinned for significantly longer durations, it will always get modified
somehow, so kvm_release_page_dirty() will always be the appropriate choice.

Here's the new patch; I will send the changes in vmptrld and all the places
that used to map/unmap the vmcs12 when I send new versions of the following
patches.
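In the meantime, just to make the idea concrete, here is a rough sketch of
what the VMPTRLD-time pinning could look like. This is illustration only, not
the code that will actually be sent: the function name, return values and
error handling are placeholders, and it assumes the guest-physical address of
the vmcs12 region ("vmptr") has already been read from the instruction
operand. It only uses the fields and helpers added by the patch below.

static int nested_load_current_vmcs12(struct kvm_vcpu *vcpu, gpa_t vmptr)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);
	struct page *page;
	struct vmcs12 *new_vmcs12;

	if (vmptr == vmx->nested.current_vmptr)
		return 0;	/* this VMCS is already current, nothing to do */

	page = nested_get_page(vcpu, vmptr);
	if (page == NULL)
		return -EINVAL;	/* the real code would VMfail here instead */

	new_vmcs12 = kmap(page);	/* VMPTRLD emulation runs where sleeping is fine */
	if (new_vmcs12->revision_id != VMCS12_REVISION) {
		kunmap(page);
		nested_release_page(page);
		return -EINVAL;
	}

	/* Unpin and unmap whatever a previous VMPTRLD made current */
	if (vmx->nested.current_vmptr != -1ull) {
		kunmap(vmx->nested.current_vmcs12_page);
		nested_release_page(vmx->nested.current_vmcs12_page);
	}

	vmx->nested.current_vmptr = vmptr;
	vmx->nested.current_vmcs12_page = page;
	vmx->nested.current_vmcs12 = new_vmcs12;
	return 0;
}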
------
Subject: [PATCH 5/24] Introduce vmcs12: a VMCS structure for L1

An implementation of VMX needs to define a VMCS structure. This structure
is kept in guest memory, but is opaque to the guest (who can only read or
write it with VMX instructions).

This patch starts to define the VMCS structure which our nested VMX
implementation will present to L1. We call it "vmcs12", as it is the VMCS
that L1 keeps for its L2 guests.

This patch also adds the notion (as required by the VMX spec) of the
"current VMCS", and finally includes utility functions for mapping the
guest-allocated VMCSs in host memory.

Signed-off-by: Nadav Har'El <nyh@xxxxxxxxxx>
---
--- .before/arch/x86/kvm/vmx.c	2010-06-22 15:57:45.000000000 +0300
+++ .after/arch/x86/kvm/vmx.c	2010-06-22 15:57:45.000000000 +0300
@@ -126,6 +126,34 @@ struct shared_msr_entry {
 };
 
 /*
+ * struct vmcs12 describes the state that our guest hypervisor (L1) keeps for a
+ * single nested guest (L2), hence the name vmcs12. Any VMX implementation has
+ * a VMCS structure, and vmcs12 is our emulated VMX's VMCS. This structure is
+ * stored in guest memory specified by VMPTRLD, but is opaque to the guest,
+ * which must access it using VMREAD/VMWRITE/VMCLEAR instructions. More
+ * than one of these structures may exist, if L1 runs multiple L2 guests.
+ * nested_vmx_run() will use the data here to build a VMCS for the underlying
+ * hardware which will be used to run L2.
+ * This structure is packed in order to preserve the binary content after live
+ * migration. If there are changes in the content or layout, VMCS12_REVISION
+ * must be changed.
+ */
+struct __packed vmcs12 {
+	/* According to the Intel spec, a VMCS region must start with the
+	 * following two fields. Then follow implementation-specific data.
+	 */
+	u32 revision_id;
+	u32 abort;
+};
+
+/*
+ * VMCS12_REVISION is an arbitrary id that should be changed if the content or
+ * layout of struct vmcs12 is changed. MSR_IA32_VMX_BASIC returns this id, and
+ * VMPTRLD verifies that the VMCS region that L1 is loading contains this id.
+ */
+#define VMCS12_REVISION 0x11e57ed0
+
+/*
  * The nested_vmx structure is part of vcpu_vmx, and holds information we need
  * for correct emulation of VMX (i.e., nested VMX) on this vcpu. For example,
  * the current VMCS set by L1, a list of the VMCSs used to run the active
@@ -134,6 +162,12 @@ struct shared_msr_entry {
 struct nested_vmx {
 	/* Has the level1 guest done vmxon? */
 	bool vmxon;
+
+	/* The guest-physical address of the current VMCS L1 keeps for L2 */
+	gpa_t current_vmptr;
+	/* The host-usable pointer to the above */
+	struct page *current_vmcs12_page;
+	struct vmcs12 *current_vmcs12;
 };
 
 struct vcpu_vmx {
@@ -197,6 +231,21 @@ static inline struct vcpu_vmx *to_vmx(st
 	return container_of(vcpu, struct vcpu_vmx, vcpu);
 }
 
+static struct page *nested_get_page(struct kvm_vcpu *vcpu, gpa_t addr)
+{
+	struct page *page = gfn_to_page(vcpu->kvm, addr >> PAGE_SHIFT);
+	if (is_error_page(page)) {
+		kvm_release_page_clean(page);
+		return NULL;
+	}
+	return page;
+}
+
+static void nested_release_page(struct page *page)
+{
+	kvm_release_page_dirty(page);
+}
+
 static int init_rmode(struct kvm *kvm);
 static u64 construct_eptp(unsigned long root_hpa);
 static void kvm_cpu_vmxon(u64 addr);
@@ -3464,6 +3513,11 @@ static int handle_vmoff(struct kvm_vcpu
 
 	to_vmx(vcpu)->nested.vmxon = false;
 
+	if (to_vmx(vcpu)->nested.current_vmptr != -1ull) {
+		kunmap(to_vmx(vcpu)->nested.current_vmcs12_page);
+		nested_release_page(to_vmx(vcpu)->nested.current_vmcs12_page);
+	}
+
 	skip_emulated_instruction(vcpu);
 	return 1;
 }
@@ -4136,6 +4190,10 @@ static void vmx_free_vcpu(struct kvm_vcp
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
 	free_vpid(vmx);
+	if (vmx->nested.vmxon && to_vmx(vcpu)->nested.current_vmptr != -1ull) {
+		kunmap(to_vmx(vcpu)->nested.current_vmcs12_page);
+		nested_release_page(to_vmx(vcpu)->nested.current_vmcs12_page);
+	}
 	vmx_free_vmcs(vcpu);
 	kfree(vmx->guest_msrs);
 	kvm_vcpu_uninit(vcpu);
@@ -4201,6 +4259,9 @@ static struct kvm_vcpu *vmx_create_vcpu(
 		goto free_vmcs;
 	}
 
+	vmx->nested.current_vmptr = -1ull;
+	vmx->nested.current_vmcs12 = NULL;
+
 	return &vmx->vcpu;
 
 free_vmcs:
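(For illustration only, not part of the patch above: because the current
vmcs12 page now stays pinned and kmapped from VMPTRLD until VMCLEAR, another
VMPTRLD, or cleanup, the follow-up patches can reach it with a trivial
accessor along these lines; the helper name is just a placeholder.)

/*
 * Illustrative sketch: return the vmcs12 that L1 made current with VMPTRLD.
 * No gfn_to_page()/kmap() is needed on this path, because the page was
 * pinned and mapped at VMPTRLD time and stays mapped until it is replaced
 * or the vcpu is freed.
 */
static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
{
	return to_vmx(vcpu)->nested.current_vmcs12;
}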
--
Nadav Har'El                        | Tuesday, Jun 22 2010, 10 Tammuz 5770
nyh@xxxxxxxxxxxxxxxxxxx             |-----------------------------------------
Phone +972-523-790466, ICQ 13349191 |Corduroy pillows - they're making
http://nadav.harel.org.il           |headlines!