On 04/20/2012 06:11 PM, Andrew Morton wrote:
On Fri, 13 Apr 2012 15:38:41 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:
mmu_shrink() is heavy in itself: it iterates over all kvms while holding
the kvm_lock. Rik and I spotted this code during LSF, and it turns out we
don't need to do the scan at all if there is nothing to shrink.
@@ -3900,6 +3905,9 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
if (nr_to_scan == 0)
goto out;
+ if (!get_kvm_total_used_mmu_pages())
+ return 0;
+
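The pattern the hunk adds is a cheap early exit before the expensive locked walk. A minimal standalone sketch of that logic (the names `total_used_pages` and `shrink_sketch` are illustrative stand-ins, not the actual kernel symbols, and the locked VM-list walk is reduced to a simple loop):

```c
#include <assert.h>

/* Hypothetical stand-in for the global kvm_total_used_mmu_pages counter. */
static unsigned long total_used_pages;

/* Sketch of the patched shrinker path: bail out before taking any lock
 * or iterating over VMs when there are no MMU pages to reclaim. */
static int shrink_sketch(unsigned long nr_to_scan)
{
	int freed = 0;

	if (nr_to_scan == 0)
		return 0;

	/* The patch's addition: nothing allocated, nothing to do. */
	if (total_used_pages == 0)
		return 0;

	/* ... the real code would take kvm_lock and walk vm_list here ... */
	while (nr_to_scan-- && total_used_pages) {
		total_used_pages--;
		freed++;
	}
	return freed;
}
```

The point of the check is that the zero case costs a single read of a counter, whereas entering the scan costs taking the global kvm_lock and iterating every VM even when there is nothing to free.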
Do we actually know that this patch helps anything? Any measurements? Is
kvm_total_used_mmu_pages==0 at all common?
On re-reading mmu.c, it looks like even with EPT or NPT,
we end up creating mmu pages for the nested page tables.
I have not had the time to look into it more, but it would
be nice to know if the patch has any effect at all.
--
All rights reversed