Re: [PATCH] KVM: x86: Avoid zapping mmio sptes twice for generation wraparound

On 03/07/2013 10:39, Xiao Guangrong wrote:
> On 07/03/2013 04:28 PM, Paolo Bonzini wrote:
>> On 03/07/2013 10:18, Takuya Yoshikawa wrote:
>>> Since kvm_arch_prepare_memory_region() is called right after installing
>>> the slot marked invalid, wraparound checking should be there to avoid
>>> zapping mmio sptes when mmio generation is still MMIO_MAX_GEN - 1.
>>>
>>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@xxxxxxxxxxxxx>
>>> ---
>>>  This seems to be the simplest solution for fixing the off-by-one issue
>>>  we discussed before.
>>>
>>>  arch/x86/kvm/mmu.c |    5 +----
>>>  arch/x86/kvm/x86.c |    7 +++++++
>>>  2 files changed, 8 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 0d094da..bf7af1e 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -4383,11 +4383,8 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
>>>  	/*
>>>  	 * The very rare case: if the generation-number is round,
>>>  	 * zap all shadow pages.
>>> -	 *
>>> -	 * The max value is MMIO_MAX_GEN - 1 since it is not called
>>> -	 * when mark memslot invalid.
>>>  	 */
>>> -	if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1))) {
>>> +	if (unlikely(kvm_current_mmio_generation(kvm) >= MMIO_MAX_GEN)) {
>>>  		printk_ratelimited(KERN_INFO "kvm: zapping shadow pages for mmio generation wraparound\n");
>>>  		kvm_mmu_invalidate_zap_all_pages(kvm);
>>>  	}
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index 7d71c0f..9ddd4ff 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -7046,6 +7046,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>>  		memslot->userspace_addr = userspace_addr;
>>>  	}
>>>  
>>> +	/*
>>> +	 * In these cases, slots->generation has been increased for marking the
>>> +	 * slot invalid, so we need wraparound checking here.
>>> +	 */
>>> +	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE))
>>> +		kvm_mmu_invalidate_mmio_sptes(kvm);
>>> +
>>>  	return 0;
>>>  }
>>>  
>>>
>>
>> Applied, thanks.
> 
> Please wait a while. I cannot understand it very clearly yet.

I'm only applying it to the queue anyway, until Linus pulls.

> This conditional check will cause an overflowed value to be cached into the
> mmio spte.  The simple case: if kvm adds new slots many times, the mmio
> generation easily becomes greater than MMIO_MAX_GEN.

The mmio generation is masked to MMIO_GEN_MASK:

        return (kvm_memslots(kvm)->generation +
                      MMIO_MAX_GEN - 150) & MMIO_GEN_MASK;
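
(A minimal standalone sketch of why nothing overflows; the constants below
are illustrative assumptions for the sketch, the real definitions live in
mmu.c:)

        /* Illustrative constants only, assumed for this sketch. */
        #define MMIO_GEN_SHIFT  19
        #define MMIO_GEN_MASK   ((1 << MMIO_GEN_SHIFT) - 1)

        /*
         * However large slots->generation grows, the value that ends up
         * cached in an mmio spte stays within [0, MMIO_GEN_MASK].
         */
        static unsigned int current_mmio_generation(unsigned long slots_generation)
        {
                return slots_generation & MMIO_GEN_MASK;
        }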

What Takuya's patch does is basically "if __kvm_set_memory_region called
install_new_memslots, call kvm_mmu_invalidate_mmio_sptes".

kvm_arch_prepare_memory_region is preceded by install_new_memslots if
change is KVM_MR_DELETE or KVM_MR_MOVE.  kvm_arch_commit_memory_region
is always preceded by install_new_memslots.  So the logic in x86.c
matches the one in __kvm_set_memory_region.
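
Roughly, the ordering inside __kvm_set_memory_region is the following
(simplified sketch, locking and error handling elided; not the exact code):

        if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
                /* install slots with the old slot marked invalid;
                 * this bumps slots->generation */
                install_new_memslots(kvm, slots, NULL);
        }

        kvm_arch_prepare_memory_region(kvm, memslot, mem, change);

        /* ... build the new slot ... */

        /* install the final memslots; bumps slots->generation again */
        install_new_memslots(kvm, slots, &new);
        kvm_arch_commit_memory_region(kvm, mem, &old, change);

so every generation bump is immediately followed by an arch hook that can
perform the wraparound check.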

With this change, each change to the memslots is matched by a call to
kvm_mmu_invalidate_mmio_sptes, and there is no need to invalidate twice
before the generation wraps around.
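
Concretely, a KVM_MR_DELETE or KVM_MR_MOVE advances slots->generation twice,
once when the invalid slot is installed and once when the final memslots are
installed; each bump is now followed by its own check (the first in
kvm_arch_prepare_memory_region, the second in kvm_arch_commit_memory_region),
which is why the test in mmu.c can compare against MMIO_MAX_GEN instead of
MMIO_MAX_GEN - 1.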

Paolo