Re: [PATCH 1/2] KVM: MMU: Mark sp mmio cached when creating mmio spte

On 03/14/2013 01:13 PM, Xiao Guangrong wrote:
> On 03/14/2013 09:58 AM, Marcelo Tosatti wrote:
>> On Wed, Mar 13, 2013 at 10:05:20PM +0800, Xiao Guangrong wrote:
>>> On 03/13/2013 09:40 PM, Takuya Yoshikawa wrote:
>>>> On Wed, 13 Mar 2013 20:42:41 +0800
>>>> Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx> wrote:
>>>>
>>>>>>>>> How about saving all mmio sptes into an mmio-rmap?
>>>>>>>>
>>>>>>>> The problem is that other mmu code would need to care about the pointers
>>>>>>>> stored in the new rmap list: when mmu_shrink zaps shadow pages for example.
>>>>>>>
>>>>>>> It is not hard... all of that code is already wrapped by *zap_spte*.
>>>>>>>
>>>>>> So are you going to send a patch? What do you think about applying this
>>>>>> as a temporary solution?
>>>>>
>>>>> Hi Gleb,
>>>>>
>>>>> Since it only needs a small change on top of this patch, I think we can
>>>>> apply the rmap-based approach directly.
>>>>>
>>>>> Takuya, could you please do this? ;)
>>>>
>>>> I'm fine with improving the patch myself, but I'm still thinking about
>>>> its downsides.
>>>>
>>>> In zap_spte, wouldn't we need to search for the pointer to be removed in
>>>> the global mmio-rmap list?  How long can that list be?
>>>
>>> It is not that bad. With soft mmu, rmap lists already grow to more than
>>> 300 entries. With hard mmu, mmio sptes are normally not zapped frequently
>>> (they are set, rarely cleared).
>>>
>>> The worst case is zap-all-mmio-spte, which removes every mmio spte. That
>>> operation can be sped up by applying my previous patch:
>>> KVM: MMU: fast drop all spte on the pte_list
>>>
>>>>
>>>> Implementing it may not be difficult, but I'm not sure we would get a
>>>> pure improvement.  Unless we are almost certain of that, I think we
>>>> should take the basic approach first.
>>>
>>> I am quite sure that zapping all mmio sptes is faster than zapping mmio
>>> shadow pages. ;)
>>
>> With a huge number of shadow pages (think 512GB guest, 262144 pte-level
>> shadow pages to map), it might be a problem.
> 
> That is one of the reasons why I think zapping mmio shadow pages is not a good idea. ;)
> 
> This patch needs to walk all shadow pages to find the mmio shadow pages and
> zap them, so its cost depends on how much memory the guest uses (huge memory
> means a huge number of shadow pages, as you said). But the time to zap an
> mmio spte is constant, regardless of how much memory is used.
> 
>>
>>>> What do you think?
>>>
>>> I am wondering: if zapping all shadow pages is fast enough (after my
>>> patchset), do we really need to care about this?
>>
>> Still needed: your patch reduces kvm_mmu_zap_all() time, but as you can
>> see, with huge-memory guests even a 100% improvement over the current
>> situation will still be a bottleneck (and, as you noted, the deletion case
>> is still unsolved).
> 
> The improvement can be greater when more memory is used. (I only used 2G of
> guest memory, since my test case is a 32-bit program that cannot use huge
> memory, and there was no lock contention in my test case.)
> 
> Actually, the time complexity of current kvm_mmu_zap_all is the same as zap

                                    ^^^^^
Sorry, not the current way; I meant the optimized way in my patchset.
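
For anyone skimming the thread, here is a minimal, self-contained userspace
sketch of the mmio-rmap idea being discussed. It is only an illustration, not
KVM code: the names (track_mmio_spte, untrack_mmio_spte, zap_all_mmio_sptes)
and the toy linked list are hypothetical stand-ins; in KVM the list would be
built on the existing pte_list_add()/pte_list_remove() helpers in
arch/x86/kvm/mmu.c, and the sptes live in shadow page tables rather than a
local array.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical stand-in for a shadow page table entry. */
typedef uint64_t spte_t;

/* Toy singly linked list standing in for KVM's pte_list desc chains. */
struct mmio_rmap_node {
	spte_t *sptep;
	struct mmio_rmap_node *next;
};

static struct mmio_rmap_node *mmio_rmap;	/* global mmio-rmap head */

/* Called when an mmio spte is created: remember where it lives. */
static void track_mmio_spte(spte_t *sptep)
{
	struct mmio_rmap_node *n = malloc(sizeof(*n));

	n->sptep = sptep;
	n->next = mmio_rmap;
	mmio_rmap = n;
}

/*
 * Called from the zap_spte path: search the list and drop the pointer.
 * This linear search is exactly the cost Takuya asks about above; the
 * spte itself is cleared by the zap path, this only maintains the list.
 */
static void untrack_mmio_spte(spte_t *sptep)
{
	struct mmio_rmap_node **p = &mmio_rmap;

	while (*p) {
		if ((*p)->sptep == sptep) {
			struct mmio_rmap_node *dead = *p;

			*p = dead->next;
			free(dead);
			return;
		}
		p = &(*p)->next;
	}
}

/*
 * Zap every mmio spte: linear in the number of mmio sptes and
 * independent of how many shadow pages the guest has.
 */
static void zap_all_mmio_sptes(void)
{
	while (mmio_rmap) {
		struct mmio_rmap_node *n = mmio_rmap;

		*n->sptep = 0;		/* clear the spte itself */
		mmio_rmap = n->next;
		free(n);
	}
}

int main(void)
{
	spte_t sptes[4] = { 0xdead1, 0xdead2, 0xdead3, 0xdead4 };
	int i;

	for (i = 0; i < 4; i++)
		track_mmio_spte(&sptes[i]);

	/* A single zap: sptes[2] is cleared elsewhere, only untracked here. */
	untrack_mmio_spte(&sptes[2]);

	/* The zap-all-mmio path clears the three still-tracked sptes. */
	zap_all_mmio_sptes();

	for (i = 0; i < 4; i++)
		printf("spte[%d] = %#llx\n", i, (unsigned long long)sptes[i]);
	return 0;
}

The trade-off under discussion is visible here: untrack_mmio_spte() pays a
linear search on every individual zap, while zap_all_mmio_sptes() runs in
time proportional to the number of mmio sptes, no matter how many shadow
pages the guest has, which is the constant-per-spte behavior described above.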
