RE: [PATCH 4/4 v4] KVM: VMX: VMXON/VMXOFF usage changes.

Marcelo Tosatti wrote:
> On Tue, May 11, 2010 at 06:29:48PM +0800, Xu, Dongxiao wrote:
>> From: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
>> 
>> SDM suggests VMXON should be called before VMPTRLD, and VMXOFF
>> should be called after doing VMCLEAR.
>> 
>> Therefore, in the VMM coexistence case, we should first call VMXON
>> before any VMCS operation, and then call VMXOFF after the
>> operation is done.
>> 
>> Signed-off-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
>> ---
>>  arch/x86/kvm/vmx.c |   38 +++++++++++++++++++++++++++++++-------
>>  1 files changed, 31 insertions(+), 7 deletions(-)
>> 
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index c536b9d..dbd47a7 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -168,6 +168,8 @@ static inline struct vcpu_vmx *to_vmx(struct kvm_vcpu *vcpu)
>> 
>>  static int init_rmode(struct kvm *kvm);
>>  static u64 construct_eptp(unsigned long root_hpa);
>> +static void kvm_cpu_vmxon(u64 addr);
>> +static void kvm_cpu_vmxoff(void);
>> 
>>  static DEFINE_PER_CPU(struct vmcs *, vmxarea);
>>  static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
>> @@ -786,8 +788,11 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>>  {
>>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>>  	u64 tsc_this, delta, new_offset;
>> +	u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
>> 
>> -	if (vmm_exclusive && vcpu->cpu != cpu)
>> +	if (!vmm_exclusive)
>> +		kvm_cpu_vmxon(phys_addr);
>> +	else if (vcpu->cpu != cpu)
>>  		vcpu_clear(vmx);
>> 
>>  	if (per_cpu(current_vmcs, cpu) != vmx->vmcs) {
>> @@ -833,8 +838,10 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>>  static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
>>  {
>>  	__vmx_load_host_state(to_vmx(vcpu));
>> -	if (!vmm_exclusive)
>> +	if (!vmm_exclusive) {
>>  		__vcpu_clear(to_vmx(vcpu));
>> +		kvm_cpu_vmxoff();
>> +	}
>>  }
>> 
>>  static void vmx_fpu_activate(struct kvm_vcpu *vcpu)
>> @@ -1257,9 +1264,11 @@ static int hardware_enable(void *garbage)
>>  		       FEATURE_CONTROL_LOCKED |
>>  		       FEATURE_CONTROL_VMXON_ENABLED);
>>  	write_cr4(read_cr4() | X86_CR4_VMXE); /* FIXME: not cpu hotplug safe */
>> -	kvm_cpu_vmxon(phys_addr);
>> 
>> -	ept_sync_global();
>> +	if (vmm_exclusive) {
>> +		kvm_cpu_vmxon(phys_addr);
>> +		ept_sync_global();
>> +	}
>> 
>>  	return 0;
> 
> The documentation recommends usage of INVEPT all-context after
> execution of VMXON and prior to execution of VMXOFF. Is it not
> necessary? 

With this patch applied, whenever a vCPU is scheduled onto a CPU, it
calls tlb_flush() to invalidate the EPT and VPID caches/TLB entries for
that vCPU. Therefore correctness for KVM is guaranteed.

Thanks,
Dongxiao
