Re: [PATCH v2 0/7] KVM: MMU: fast zap all shadow pages

On Thu, Apr 18, 2013 at 11:01:18AM -0300, Marcelo Tosatti wrote:
> On Thu, Apr 18, 2013 at 12:42:39PM +0300, Gleb Natapov wrote:
> > > > that, but if not then less code is better.
> > > 
> > > The number of sp->role.invalid=1 pages is small (only shadow roots). It
> > > can grow but is bounded to a handful. No improvement visible there.
> > > 
> > > The number of shadow pages with old mmu_gen_number is potentially large.
> > > 
> > > Returning all shadow pages to the allocator is problematic because it
> > > takes a long time (therefore the suggestion to postpone it).
> > > 
> > > Spreading the work of freeing (or reusing) those shadow pages across
> > > individual page fault instances alleviates the mmu_lock hold-time
> > > issue without significantly penalizing operation after
> > > kvm_mmu_zap_all (which has to rebuild all pagetables anyway).
> > > 
> > > Do you prefer to modify the SLAB allocator to aggressively free these
> > > stale shadow pages rather than have kvm_mmu_get_page reuse them?
> > Are you saying that what makes kvm_mmu_zap_all() slow is that we return
> > all the shadow pages to the SLAB allocator? As far as I understand, what
> > makes it slow is walking over a huge number of shadow pages via various
> > lists; actually releasing them to the SLAB is not an issue, otherwise
> > the problem could have been solved by just moving
> > kvm_mmu_commit_zap_page() out of the mmu_lock. If there is measurable
> > SLAB overhead from not reusing the pages I am all for reusing them, but
> > is this really the case, or is it just premature optimization?
> 
> Actually releasing them is not a problem. Walking all the pages and
> lists, and releasing them in the process, is part of the problem
> ("returning them to the allocator" would have been clearer as "freeing
> them").
> 
> The point is that at some point you have to walk all pages and release
> their data structures. With Xiao's scheme it is possible to avoid this
> lengthy process by either:
> 
> 1) reusing the pages with a stale generation number
> or
> 2) releasing them via the SLAB shrinker more aggressively
> 
But is it really so? The number of allocated shadow pages is limited
via the n_max_mmu_pages mechanism, so I expect most of the freeing to
happen in make_mmu_pages_available(), which is called during page fault;
the freeing will therefore be spread across page faults more or less
equally anyway. Doing kvm_mmu_prepare_zap_page()/kvm_mmu_commit_zap_page()
and zapping an unknown number of shadow pages during kvm_mmu_get_page()
just to reuse one does not sound like a clear win to me.
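
For concreteness, here is a minimal user-space sketch of the
generation-number idea under discussion (this is not the KVM code;
shadow_page, mmu_valid_gen, fast_zap_all() and mmu_get_page() below are
simplified stand-ins for the kernel structures and functions): "fast zap"
only bumps a global generation counter, and stale pages are released
lazily when the fault-path lookup walks past them, instead of all at once
under mmu_lock. Whether a stale page is freed here or re-stamped and
reused is exactly the trade-off being debated above.

#include <stdio.h>
#include <stdlib.h>

struct shadow_page {
	unsigned long gfn;        /* guest frame this page shadows   */
	unsigned long valid_gen;  /* generation it was created under */
	struct shadow_page *next;
};

static struct shadow_page *sp_list;   /* all shadow pages       */
static unsigned long mmu_valid_gen;   /* current MMU generation */

/* O(1) "zap all": no list walk, no freeing under a lock. */
static void fast_zap_all(void)
{
	mmu_valid_gen++;
}

static int sp_is_obsolete(struct shadow_page *sp)
{
	return sp->valid_gen != mmu_valid_gen;
}

/*
 * Fault-path lookup: an obsolete page is unlinked and freed here,
 * spreading the teardown cost across page faults (alternatively it
 * could be re-stamped with the current generation and reused).
 */
static struct shadow_page *mmu_get_page(unsigned long gfn)
{
	struct shadow_page **pp = &sp_list, *sp;

	while ((sp = *pp) != NULL) {
		if (sp_is_obsolete(sp)) {
			*pp = sp->next;   /* lazily release the stale page */
			free(sp);
			continue;
		}
		if (sp->gfn == gfn)
			return sp;
		pp = &sp->next;
	}

	sp = calloc(1, sizeof(*sp));
	if (!sp)
		abort();
	sp->gfn = gfn;
	sp->valid_gen = mmu_valid_gen;
	sp->next = sp_list;
	sp_list = sp;
	return sp;
}

int main(void)
{
	mmu_get_page(1);
	mmu_get_page(2);
	fast_zap_all();           /* everything above is now stale */
	struct shadow_page *sp = mmu_get_page(1);
	printf("gfn=%lu gen=%lu (current gen %lu)\n",
	       sp->gfn, sp->valid_gen, mmu_valid_gen);
	return 0;
}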

> (another typo, I meant "SLAB shrinker", not "SLAB allocator").
> 
> But you seem to be concerned about 1) due to code complexity issues?
> 
It adds code that looks redundant to me. I may be wrong, of course; if
it is a demonstrable win I am all for it.

--
			Gleb.