Re: [PATCH v3 0/6] KVM: MMU: fast invalidate all mmio sptes

On 06/19/2013 07:08 PM, Paolo Bonzini wrote:
> On 10/06/2013 19:03, Gleb Natapov wrote:
>> On Mon, Jun 10, 2013 at 10:43:52PM +0900, Takuya Yoshikawa wrote:
>>> On Mon, 10 Jun 2013 16:39:37 +0800
>>> Xiao Guangrong <xiaoguangrong.eric@xxxxxxxxx> wrote:
>>>
>>>> On 06/10/2013 03:56 PM, Gleb Natapov wrote:
>>>>> On Fri, Jun 07, 2013 at 04:51:22PM +0800, Xiao Guangrong wrote:
>>>
>>>>> Looks good to me, but doesn't this make kvm_mmu_zap_mmio_sptes() and
>>>>> sp->mmio_cached obsolete? Shouldn't they be removed as part of the patch series?
>>>>
>>>> Yes, I agree, they should be removed. :)
>>>
>>> I'm fine with removing it but please make it clear that you all agree
>>> on the same basis.
>>>
>>> Last time, Paolo mentioned the possibility of using some bits of the spte
>>> for other things.  The suggestion there was to keep the sp->mmio_cached code
>>> for the time when we would need to reduce the bits used for generation numbers.
>>>
>>> Do you think that zap_all() is now preemptible and can handle this
>>> situation about as well as the current kvm_mmu_zap_mmio_sptes()?
>>>
>>> One downside is the need to zap unrelated shadow pages, but if this case
>>> is really very rare, yes I agree, it should not be a problem: it depends
>>> on how many bits we can use.
>>>
>>> Just please reconfirm.
>>>
>> It was me who mentioned the possibility of using some bits of the spte for
>> other things, but for now I have a use for only one bit. Now that you
>> have reminded me of that discussion, I am not so sure we want to
>> remove kvm_mmu_zap_mmio_sptes(); on the other hand it is not
>> preemptible, so a large number of mmio sptes can cause soft lockups.
>> zap_all() is better in this regard now.
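
(For completeness, here is a minimal userspace sketch of the generation-number
scheme being discussed; the names, the 19-bit width and the spte layout below
are illustrative assumptions of mine, not the real KVM code.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GEN_BITS 19                     /* assumed number of spare spte bits */
#define GEN_MASK ((1u << GEN_BITS) - 1)

static uint32_t memslot_generation;     /* bumped on every memslot update */

/* Encode the current generation into the low bits of a fake mmio "spte". */
static uint64_t make_mmio_spte(uint64_t gfn)
{
        return (gfn << GEN_BITS) | (memslot_generation & GEN_MASK);
}

/* A cached mmio spte is stale once its embedded generation no longer matches. */
static bool mmio_spte_valid(uint64_t spte)
{
        return (spte & GEN_MASK) == (memslot_generation & GEN_MASK);
}

int main(void)
{
        uint64_t spte = make_mmio_spte(0x1234);

        printf("before memslot change: %d\n", mmio_spte_valid(spte)); /* 1 */
        memslot_generation++;   /* e.g. one KVM_SET_USER_MEMORY_REGION call */
        printf("after memslot change:  %d\n", mmio_spte_valid(spte)); /* 0 */
        return 0;
}

The point is that invalidating all mmio sptes becomes a single generation bump;
stale entries are simply refaulted later instead of being walked and zapped up
front.
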
> 
> I asked Gleb on IRC, and he's fine with applying patch 7 too (otherwise
> there's hardly any benefit, because kvm_mmu_zap_mmio_sptes is
> non-preemptible).
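
(To illustrate the preemptibility point, a rough userspace analogy; the code
below is an assumption of mine, not KVM's. The difference is between doing the
whole walk in one shot under a lock and chunking it so the CPU can be given up
between batches, which is what keeps a huge zap from looking like a soft
lockup.)

#include <pthread.h>
#include <sched.h>

#define TOTAL_PAGES (1 << 20)
#define BATCH       128

static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;

static void zap_one(long i)
{
        (void)i;                /* stand-in for freeing one shadow page */
}

static void zap_all_preemptible(void)
{
        long done = 0;

        while (done < TOTAL_PAGES) {
                pthread_mutex_lock(&mmu_lock);
                for (int i = 0; i < BATCH && done < TOTAL_PAGES; i++, done++)
                        zap_one(done);
                pthread_mutex_unlock(&mmu_lock);
                sched_yield();  /* rough analogue of yielding inside the kernel loop */
        }
}

int main(void)
{
        zap_all_preemptible();
        return 0;
}
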
> 
> I'm also changing the -13 to -150, since it's quite easy to generate 150
> calls to KVM_SET_USER_MEMORY_REGION.  With QEMU, for a pretty basic
> guest with virtio-net, an IDE controller and VGA, you get:
> 
> - 9-10 calls before starting the guest, depending on the guest memory size
> 
> - around 25 during the BIOS
> 
> - around 20 during kernel boot
> 
> - 34 during a single dump of the 64 KB ROM from a virtio-net device.

Okay. The change is fine by me. :)
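
(If it helps to see where the 150 comes from, here is a tiny sketch of the
headroom check; MAX_GEN, GEN_BITS, WRAP_HEADROOM and need_full_zap() are
made-up names with an assumed bit width, not the real constants. The idea is
just to force the full zap while the generation counter is still comfortably
below the point where it could wrap and alias an old cached value, and 150
memslot updates are easy to reach given the numbers above.)

#include <stdbool.h>
#include <stdint.h>

#define GEN_BITS      19
#define MAX_GEN       ((1u << GEN_BITS) - 1)
#define WRAP_HEADROOM 150       /* this many memslot updates are easy to generate */

/* Force a full zap while there is still headroom before the generation wraps. */
static bool need_full_zap(uint32_t current_gen)
{
        return current_gen >= MAX_GEN - WRAP_HEADROOM;
}

int main(void)
{
        return need_full_zap(MAX_GEN - 10) ? 0 : 1;     /* close to the wrap: zap */
}
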





