On Sat, Apr 14, 2012 at 6:38 AM, Ying Han <yinghan@xxxxxxxxxx> wrote:
> mmu_shrink() is heavy by itself: it iterates over all kvms while
> holding the kvm_lock. Rik and I spotted this code during LSF, and it
> turns out we don't need to call the shrinker if there is nothing to
> shrink.
>
> Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu.c |   10 +++++++++-
>  1 files changed, 9 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 4cb1642..7025736 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -188,6 +188,11 @@ static u64 __read_mostly shadow_mmio_mask;
>
>  static void mmu_spte_set(u64 *sptep, u64 spte);
>
> +static inline int get_kvm_total_used_mmu_pages(void)
> +{
> +	return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> +}
> +
>  void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
>  {
>  	shadow_mmio_mask = mmio_mask;
> @@ -3900,6 +3905,9 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
>  	if (nr_to_scan == 0)
>  		goto out;
>
> +	if (!get_kvm_total_used_mmu_pages())
> +		return 0;
> +
>  	raw_spin_lock(&kvm_lock);
>
>  	list_for_each_entry(kvm, &vm_list, vm_list) {
> @@ -3926,7 +3934,7 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
>  	raw_spin_unlock(&kvm_lock);
>
>  out:
> -	return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
> +	return get_kvm_total_used_mmu_pages();
>  }

Just a nitpick: if the new helper is not created, only one hunk is
needed.

BTW, does it make sense to check nr_to_scan while scanning vm_list,
and bail out once it hits zero? (Sketched in the P.S. below.)

Good Weekend
-hd
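P.S. A completely untested sketch of that bail-out idea, against the
mmu_shrink() loop quoted above. It assumes nr_to_scan is still
decremented inside the loop body, as I remember the current code
doing; treat the hunk placement as approximate:

@@ mmu_shrink @@
 	list_for_each_entry(kvm, &vm_list, vm_list) {
+		/*
+		 * The scan budget for this call is spent; stop walking
+		 * the remaining VMs rather than holding kvm_lock while
+		 * touching every one of them.
+		 */
+		if (nr_to_scan <= 0)
+			break;

With this in place the early !get_kvm_total_used_mmu_pages() check and
the in-loop bail-out would cover both the "nothing to shrink" and the
"shrunk enough" cases.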