Gleb Natapov <gleb@xxxxxxxxxx> wrote on 17/04/2013 05:10:28 PM:

> On Wed, Apr 17, 2013 at 02:53:10PM +0300, Abel Gordon wrote:
> > Allocate a shadow vmcs used by the processor to shadow part of the fields
> > stored in the software defined VMCS12 (let L1 access fields without causing
> > exits). Note we keep a shadow vmcs only for the current vmcs12. Once a vmcs12
> > becomes non-current, its shadow vmcs is released.
> >
> >
> > Signed-off-by: Abel Gordon <abelg@xxxxxxxxxx>
> > ---
> >  arch/x86/kvm/vmx.c | 15 +++++++++++++++
> >  1 file changed, 15 insertions(+)
> >
> > --- .before/arch/x86/kvm/vmx.c	2013-04-17 14:20:50.000000000 +0300
> > +++ .after/arch/x86/kvm/vmx.c	2013-04-17 14:20:50.000000000 +0300
> > @@ -355,6 +355,7 @@ struct nested_vmx {
> >  	/* The host-usable pointer to the above */
> >  	struct page *current_vmcs12_page;
> >  	struct vmcs12 *current_vmcs12;
> > +	struct vmcs *current_shadow_vmcs;
> >
> >  	/* vmcs02_list cache of VMCSs recently used to run L2 guests */
> >  	struct list_head vmcs02_pool;
> > @@ -5980,6 +5981,7 @@ static int handle_vmptrld(struct kvm_vcp
> >  	gva_t gva;
> >  	gpa_t vmptr;
> >  	struct x86_exception e;
> > +	struct vmcs *shadow_vmcs;
> >
> >  	if (!nested_vmx_check_permission(vcpu))
> >  		return 1;
> > @@ -6026,6 +6028,19 @@ static int handle_vmptrld(struct kvm_vcp
> >  		vmx->nested.current_vmptr = vmptr;
> >  		vmx->nested.current_vmcs12 = new_vmcs12;
> >  		vmx->nested.current_vmcs12_page = page;
> > +		if (enable_shadow_vmcs) {
> > +			shadow_vmcs = alloc_vmcs();
> Next patch frees vmx->nested.current_shadow_vmcs couple of lines above.
> What about reusing previous page instead of allocation new one each
> time?

Yes, we could have a single shadow vmcs per L1 vcpu that is used to shadow
multiple L2 vcpus. I preferred not to do that because I didn't want to
share the same page (physical vmcs) for different vmcs12s. However, this
is not an issue, because we overwrite the shadowed fields every time we
sync the content.

It's your call. If you prefer the re-use approach, I'll send a new version
that does that. Please confirm.
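
For reference, the re-use variant in handle_vmptrld could look roughly like
the sketch below. This is only an illustration of the idea, not the final
patch: it reuses the names from the context above (enable_shadow_vmcs,
alloc_vmcs(), vmcs_clear(), vmx->nested.current_shadow_vmcs), and the exact
placement and error-handling convention are assumptions on my side.

        if (enable_shadow_vmcs) {
                /*
                 * Allocate the shadow vmcs only once per L1 vcpu and keep
                 * it across VMPTRLDs instead of freeing/re-allocating it
                 * for every new vmcs12.
                 */
                if (!vmx->nested.current_shadow_vmcs) {
                        shadow_vmcs = alloc_vmcs();
                        if (!shadow_vmcs)
                                return -ENOMEM;
                        /* mark it as a shadow vmcs (bit 31 of the revision id) */
                        shadow_vmcs->revision_id |= (1u << 31);
                        /* init shadow vmcs to a well-defined clear state */
                        vmcs_clear(shadow_vmcs);
                        vmx->nested.current_shadow_vmcs = shadow_vmcs;
                }
                /*
                 * Nothing to do on a later VMPTRLD: the shadowed fields are
                 * overwritten on every sync, so sharing the same physical
                 * page between vmcs12s is safe.
                 */
        }

With something like this, the free on vmcs12 switch would go away and the
shadow vmcs would only be released when the L1 vcpu itself is torn down.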