On 06/16/2010 06:25 PM, Dave Hansen wrote:
The use of 'nr_to_scan' inside shrink_kvm_mmu() also ensures
that we do not over-reclaim when mmu_shrink() has already done
a significant amount of scanning in this call.
In the end, this patch defines a "scan" as:
1. An attempt to acquire a refcount on a 'struct kvm'
2. freeing a kvm mmu page
It would probably be ideal to expose some of the work done by
kvm_mmu_remove_some_alloc_mmu_pages() as also counting as
scanning, but I think we have churned enough for the moment.
It usually removes one page.
Does it always just go right now and free it, or is there any real
scanning that has to go on?
It picks a page from the tail of the LRU and frees it. There is very
little attempt to keep the LRU in LRU order, though.
We do need a scanner that looks at spte accessed bits if this isn't
going to result in performance losses.
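For illustration, here is a minimal userspace sketch of what such an accessed-bit scanner could look like, using a clock-style "second chance" pass: pages whose accessed bit is set get the bit cleared and survive; pages already clear are treated as cold and reclaimed. This is not KVM code, and scan_accessed() is a hypothetical name; real spte scanning would walk rmaps under mmu_lock.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch only: one clock-style pass over an array of
 * accessed bits. A set bit is cleared (second chance); a clear bit
 * means the page is cold and gets reclaimed. Returns the number of
 * pages reclaimed. */
static int scan_accessed(unsigned char *accessed, size_t n)
{
	int reclaimed = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (accessed[i])
			accessed[i] = 0;	/* recently used: spare it this pass */
		else
			reclaimed++;		/* cold: reclaim */
	}
	return reclaimed;
}
```

A page that is actively used keeps getting its bit re-set by the hardware between passes and is never reclaimed, which is the performance property being asked for above.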
diff -puN arch/x86/kvm/mmu.c~make-shrinker-more-aggressive arch/x86/kvm/mmu.c
--- linux-2.6.git/arch/x86/kvm/mmu.c~make-shrinker-more-aggressive 2010-06-14 11:30:44.000000000 -0700
+++ linux-2.6.git-dave/arch/x86/kvm/mmu.c 2010-06-14 11:38:04.000000000 -0700
@@ -2935,8 +2935,10 @@ static int shrink_kvm_mmu(struct kvm *kv
idx = srcu_read_lock(&kvm->srcu);
spin_lock(&kvm->mmu_lock);
-	if (kvm->arch.n_used_mmu_pages > 0)
-		freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm);
+	while (nr_to_scan > 0 && kvm->arch.n_used_mmu_pages > 0) {
+		freed_pages += kvm_mmu_remove_some_alloc_mmu_pages(kvm);
+		nr_to_scan--;
+	}
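To show the accounting change, here is a userspace sketch (not kernel code) of the new loop, modeling each kvm_mmu_remove_some_alloc_mmu_pages() call as freeing exactly one page and consuming one unit of the nr_to_scan budget; shrink_sim() is a made-up name for illustration.

```c
#include <assert.h>

/* Sketch of the patched loop's accounting: frees pages one at a time
 * until either the scan budget or the page supply runs out, so the
 * result is min(nr_to_scan, n_used_mmu_pages) under the one-page-per-
 * call assumption. */
static int shrink_sim(int nr_to_scan, int n_used_mmu_pages)
{
	int freed_pages = 0;

	while (nr_to_scan > 0 && n_used_mmu_pages > 0) {
		n_used_mmu_pages--;	/* one page freed per "scan" */
		freed_pages++;
		nr_to_scan--;
	}
	return freed_pages;
}
```

The old code freed at most one batch per call regardless of nr_to_scan; the loop instead lets the shrinker's budget drive how much work is done.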
What tree are you patching?
These applied to Linus's latest as of yesterday.
Please patch against kvm.git master (or next, which is usually a few
unregression-tested patches ahead). This code has changed.
--
error compiling committee.c: too many arguments to function