Re: [PATCH 1/2] KVM: MMU: Mark sp mmio cached when creating mmio spte


 



[ I'm still reading your patches, so please forgive me If I'm wrong. ]

On Thu, 14 Mar 2013 13:13:30 +0800
Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx> wrote:

> Actually, the time complexity of the current kvm_mmu_zap_all is the same as
> zapping mmio shadow pages under the mmu-lock (O(n), where n is the number of
> shadow page tables).  Both of them walk all shadow page tables.  The reset
> work of kvm_mmu_zap is constant.

Clearing the rmap arrays with memset cannot be constant time:
it's proportional to the number of guest pages (not shadow pages).
That said, we can probably treat it as practically constant in all
realistic cases, so I think your optimization is great!

But anyway it's worth remembering that the arrays can be very long.
A 512GB guest means 128M pages.  Clearing 1GB of memory should not take
too long(?)...  So my guess is that your method covers most of the use
cases we can think of now.

Thanks,
	Takuya

> 
> And this is a TODO item:
> (2): free shadow pages by using generation-number
> After that, kvm_mmu_zap needn't walk all shadow pages anymore.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

