Re: [PATCH 1/2] KVM: MMU: Mark sp mmio cached when creating mmio spte

On Wed, 13 Mar 2013 22:58:21 -0300
Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:

> > > In zap_spte, don't we need to search for the pointer to be removed from the
> > > global mmio-rmap list?  How long can that list be?
> > 
> > It is not bad.  On softmmu, the rmap list can already grow to more than 300
> > entries.  On hardmmu, mmio sptes are normally not zapped frequently (they are
> > just set, not cleared).

mmu_shrink() is an exception.
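
To make the cost concrete: removing one spte pointer from a single global
mmio-rmap list is a linear search, so the list length translates directly
into zap_spte latency.  A simplified sketch of that removal, not the actual
KVM pte_list helpers:

#include <linux/slab.h>
#include <linux/types.h>

/* Illustrative only: a singly linked rmap node holding one spte pointer. */
struct mmio_rmap_node {
	u64 *sptep;
	struct mmio_rmap_node *next;
};

/* O(list length): walk until the matching sptep is found, then unlink it. */
static void mmio_rmap_remove_sketch(struct mmio_rmap_node **head, u64 *sptep)
{
	struct mmio_rmap_node **pp = head;

	while (*pp) {
		if ((*pp)->sptep == sptep) {
			struct mmio_rmap_node *victim = *pp;

			*pp = victim->next;
			kfree(victim);
			return;
		}
		pp = &(*pp)->next;
	}
}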

> > 
> > The worst case is zap-all-mmio-spte, which removes all mmio sptes.  This operation
> > can be sped up after applying my previous patch:
> > KVM: MMU: fast drop all spte on the pte_list

My point is that other code paths may need to care more about latency.

Zapping all mmio sptes can happen only when changing memory regions:
not very latency sensitive, but it should still be reasonably fast so as
not to hold mmu_lock for (too) long.

Compared to that, mmu_shrink() may be called at any time, and adding
more work to it should be avoided IMO.  It should return ASAP.

In general, we should try hard to keep ourselves from affecting
unrelated code paths when optimizing something.  The global pte
list is something which could affect many code paths in the future.


So, I'm fine with trying mmio-rmap once we can actually measure
very long mmu_lock hold times caused by traversing shadow pages.

How about applying this first and then seeing the effect on big guests?
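
For reference, what this patch does is mark a shadow page when an mmio spte
is created in it, so that later only the marked pages have to be zapped.  A
rough sketch of the idea (names are illustrative, not the exact code in the
patch):

#include <linux/list.h>
#include <linux/types.h>

/* Illustrative stand-in for struct kvm_mmu_page. */
struct sp_sketch {
	struct list_head link;		/* on the active shadow page list */
	bool mmio_cached;		/* set when an mmio spte is created here */
};

/* Called from the mmio-spte creation path in this sketch. */
static void mark_sp_mmio_cached(struct sp_sketch *sp)
{
	sp->mmio_cached = true;
}

/*
 * On a memslot change, walk the active shadow pages and zap only those
 * that contain mmio sptes instead of zapping everything.
 */
static void zap_mmio_shadow_pages(struct list_head *active_pages,
				  void (*zap)(struct sp_sketch *sp))
{
	struct sp_sketch *sp, *tmp;

	list_for_each_entry_safe(sp, tmp, active_pages, link)
		if (sp->mmio_cached)
			zap(sp);
}

The cost is still a walk over all active shadow pages, but the zap work
itself is limited to the pages that actually hold mmio sptes.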

Thanks,
	Takuya


> > > Implementing it may not be difficult, but I'm not sure we would get a
> > > pure improvement.  Unless it becomes 99% certain, I think we should
> > > first take the basic approach.
> > 
> > I am definitely sure that zapping all mmio sptes is faster than zapping mmio
> > shadow pages. ;)
> 
> With a huge number of shadow pages (think 512GB guest: 262144 pte-level
> shadow pages to map it, since each pte-level page maps 2MB), it might be
> a problem.
> 
> > > What do you think?
> > 
> > I am wondering: if zapping all shadow pages is fast enough (after my patchset),
> > do we really need to care about this?
> 
> Still needed: your patch reduces kvm_mmu_zap_all() time, but as you can
> see with huge-memory guests, even a 100% improvement over the current
> situation will still be a bottleneck (and, as you noted, the deletion case
> is still unsolved).
> 
> I suppose another improvement angle is to zap only what's necessary for the
> given operation (say, the memslot hint that is available but currently unused
> on x86).
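
As an illustration of that memslot-hint idea, a hypothetical slot-scoped zap
could restrict itself to sptes whose gfn falls inside the changed slot.  None
of these helpers exist in KVM as written; it is only a sketch of the shape of
such an interface:

#include <linux/types.h>

/* Hypothetical: the gfn range covered by the memslot being changed. */
struct slot_range_sketch {
	u64 base_gfn;
	unsigned long npages;
};

static bool gfn_in_slot_sketch(const struct slot_range_sketch *slot, u64 gfn)
{
	return gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages;
}

/*
 * Zap only the sptes that map the given slot.  The iteration and the zap
 * action are left to the caller, since the real code would plug into
 * KVM's own spte walkers.
 */
static void zap_slot_sptes_sketch(const struct slot_range_sketch *slot,
				  const u64 *spte_gfns, void **sptes,
				  unsigned long nr,
				  void (*drop_spte)(void *spte))
{
	unsigned long i;

	for (i = 0; i < nr; i++)
		if (gfn_in_slot_sketch(slot, spte_gfns[i]))
			drop_spte(sptes[i]);
}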

