Re: [PATCH] KVM: MMU: Fix mmu_shrink() so that it can free mmu pages as intended

On Fri, Jul 20, 2012 at 10:04:34AM +0900, Takuya Yoshikawa wrote:
> On Wed, 18 Jul 2012 17:52:46 -0300
> Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:
> 
> > I can't understand; can you please explain this more clearly?
> 
> I think mmu pages are not worth freeing under usual memory pressure,
> especially when we have EPT/NPT on.
> 
> What's happening:
> shrink_slab() calls mmu_shrink() in vain with the default batch size of
> 128, and mmu_shrink() takes a long time to zap far fewer mmu pages than
> the requested number, usually freeing just one.  Sadly, KVM may
> recreate the page soon after that.
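> 
> Roughly, from memory (locking and srcu elided, and the identifiers may
> be slightly off), the current mmu_shrink() does something like this:
> it picks one VM off vm_list, zaps a single victim page from the tail
> of that VM's active list regardless of sc->nr_to_scan, and stops:
> 
>     static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
>     {
>             struct kvm *kvm;
> 
>             if (!sc->nr_to_scan)
>                     goto out;
> 
>             list_for_each_entry(kvm, &vm_list, vm_list) {
>                     LIST_HEAD(invalid_list);
> 
>                     /* Pick one victim page from the tail of the active
>                      * list, rotate this VM to the tail of vm_list, and
>                      * stop: only a single zap per invocation. */
>                     kvm_mmu_remove_some_alloc_mmu_pages(kvm, &invalid_list);
>                     kvm_mmu_commit_zap_page(kvm, &invalid_list);
>                     list_move_tail(&kvm->vm_list, &vm_list);
>                     break;
>             }
>     out:
>             return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
>     }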
> 
> Since we set seeks to 10 times the default, total_scan is very small
> and shrink_slab() just wastes time freeing such a small amount of
> memory that may soon be reallocated anyway: I would rather it spend
> that time scanning other objects.
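> 
> The registration, if I remember it correctly, is just:
> 
>     static struct shrinker mmu_shrinker = {
>             .shrink = mmu_shrink,
>             .seeks = DEFAULT_SEEKS * 10,
>     };
> 
> and shrink_slab() scales its scan target down in proportion to
> 1/seeks, so ten times the default seeks gives us roughly a tenth of
> the total_scan a normal cache would get.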
> 
> Actually, the total amount of memory used for mmu pages is not huge
> with EPT/NPT on: maybe smaller than that of rmap?

rmap size is a function of mmu pages, so mmu_shrink indirectly 
releases rmap also.

> So, it's clear that no one wants mmu pages to be freed like other
> objects.  Sure, our seeks value usually prevents shrink_slab() from
> calling mmu_shrink().  But what if administrators want to drop clean
> caches on the host?
> 
> Documentation/sysctl/vm.txt says:
>   Writing to this will cause the kernel to drop clean caches, dentries and
>   inodes from memory, causing that memory to become free.
> 
>   To free pagecache:
>           echo 1 > /proc/sys/vm/drop_caches
>   To free dentries and inodes:
>           echo 2 > /proc/sys/vm/drop_caches
>   To free pagecache, dentries and inodes:
>           echo 3 > /proc/sys/vm/drop_caches
> 
> I don't want mmu pages to be freed in such cases.

drop_caches should be used only on special occasions. I would not worry
about it.

> So, how about no longer reporting the total number of used mmu pages
> to shrink_slab()?
> 
> If we do so, shrink_slab() will think that there are not enough
> objects worth reclaiming from KVM.
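> 
> As a sketch of the idea only (not a tested patch): when shrink_slab()
> merely asks how many objects we have (sc->nr_to_scan == 0), we could
> report zero instead of kvm_total_used_mmu_pages:
> 
>     static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
>     {
>             if (!sc->nr_to_scan)
>                     return 0;       /* nothing worth reclaiming here */
>             ...
>     }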

No, it's important to be able to release memory quickly in low-memory
conditions.

I bet the reasoning behind the current seeks value (10 * default) is
close to arbitrary.

mmu_shrink can be smarter by freeing pages that are less likely to be
used.  IIRC Avi had some nice ideas for LRU-like schemes (search the
archives).

You can also consider the fact that freeing a higher-level pagetable
frees all of its children (which is quite dumb, actually; sequential
shrink passes should free only pages with no children).
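
Something like the following (a rough, untested sketch; a real patch
should check for live children rather than just the level, and the
identifiers are from memory): the zap loop could skip non-leaf pages
on the first pass:

    struct kvm_mmu_page *sp, *tmp;
    LIST_HEAD(invalid_list);

    list_for_each_entry_safe_reverse(sp, tmp,
                    &kvm->arch.active_mmu_pages, link) {
            /* Leaf shadow pages have no child page tables, so zapping
             * them does not tear down a whole subtree at once. */
            if (sp->role.level > PT_PAGE_TABLE_LEVEL)
                    continue;
            kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
            if (--nr_to_scan == 0)
                    break;
    }
    kvm_mmu_commit_zap_page(kvm, &invalid_list);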

> In the case of shadow paging, guests can do bad things and allocate an
> enormous number of mmu pages, so we should report that excess to
> shrink_slab() as freeable objects, not the total.

A guest idle for 2 months should not have its mmu pages in memory.

>   |--- needed ---|--- freeable under memory pressure ---|
> 
> We may be able to use n_max_mmu_pages for this: the shrinker would try
> to free mmu pages until the number drops back to the goal.
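> 
> As a sketch of that idea only (the exact goal is debatable, and
> "mmu_pages_goal" below is a made-up name for whatever we derive from
> n_max_mmu_pages), the query path would report just the excess:
> 
>     /* Pages below the goal are "needed"; only the excess is offered
>      * to the shrinker as freeable objects. */
>     if (!sc->nr_to_scan) {
>             long used = percpu_counter_read_positive(
>                                 &kvm_total_used_mmu_pages);
>             return max(used - mmu_pages_goal, 0L);
>     }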
> 
> Thanks,
> 	Takuya