On Fri, Apr 20, 2012 at 3:53 PM, Rik van Riel <riel@xxxxxxxxxx> wrote:
> On 04/20/2012 06:11 PM, Andrew Morton wrote:
>>
>> On Fri, 13 Apr 2012 15:38:41 -0700
>> Ying Han <yinghan@xxxxxxxxxx> wrote:
>>
>>> The mmu_shrink() is heavy by itself: it iterates over all kvms while
>>> holding the kvm_lock. Rik and I spotted this code during LSF, and it
>>> turns out we don't need to call the shrinker if there is nothing to
>>> shrink.
>
>>> @@ -3900,6 +3905,9 @@ static int mmu_shrink(struct shrinker *shrink,
>>> struct shrink_control *sc)
>>>         if (nr_to_scan == 0)
>>>                 goto out;
>>>
>>> +       if (!get_kvm_total_used_mmu_pages())
>>> +               return 0;
>>> +
>
>> Do we actually know that this patch helps anything?  Any measurements?
>> Is kvm_total_used_mmu_pages == 0 at all common?
>
> On re-reading mmu.c, it looks like even with EPT or NPT,
> we end up creating mmu pages for the nested page tables.

I think you are right here, so the patch doesn't address the real pain.

My understanding is that the real pain is the poor implementation of
mmu_shrink(): it iterates over all the registered kvms and does only a
little work at a time, while holding two big locks. I learned from
mikew@ (also cc-ed) that this causes latency spikes and unfairness
among kvm instances in some of the experiments we've seen. Mike may be
able to say more about that.

--Ying

> I have not had the time to look into it more, but it would
> be nice to know if the patch has any effect at all.
>
> --
> All rights reversed
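
P.S. For anyone not looking at mmu.c right now, the loop shape in
question is roughly the following. This is a simplified sketch of how
the v3.4-era mmu_shrink() is structured; the per-VM page freeing and
SRCU details are elided, so treat it as an illustration rather than the
verbatim kernel code:

	static int mmu_shrink(struct shrinker *shrink,
			      struct shrink_control *sc)
	{
		struct kvm *kvm;
		int nr_to_scan = sc->nr_to_scan;

		if (nr_to_scan == 0)
			goto out;

		/* Big lock #1: serializes the shrinker against every VM. */
		raw_spin_lock(&kvm_lock);

		list_for_each_entry(kvm, &vm_list, vm_list) {
			/* Big lock #2: stalls this VM's page-fault path. */
			spin_lock(&kvm->mmu_lock);

			/* ... free a small batch of shadow MMU pages ... */

			spin_unlock(&kvm->mmu_lock);
		}

		raw_spin_unlock(&kvm_lock);
	out:
		return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
	}

The point being: every call walks the global VM list under kvm_lock and
reclaims only a handful of pages while it is there, so a burst of
shrinker invocations turns into repeated global stalls, and whichever
VM sits at the head of the list keeps paying for the reclaim.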