On 12/06/2011 11:56 AM, Takuya Yoshikawa wrote:
> Hi, I was looking at the same place differently.
>
> (2011/12/03 2:35), Jan Kiszka wrote:
>> freed_pages is never evaluated, so remove it, as well as the return code of
>> kvm_mmu_remove_some_alloc_mmu_pages that was so far delivered to its only user.
>>
>> Signed-off-by: Jan Kiszka <jan.kiszka@xxxxxxxxxxx>
>> ---
>>  arch/x86/kvm/mmu.c |   12 ++++++------
>>  1 files changed, 6 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index b1178d1..efb8576 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -3898,14 +3898,14 @@ restart:
>>  	spin_unlock(&kvm->mmu_lock);
>>  }
>>
>> -static int kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm,
>> -						struct list_head *invalid_list)
>> +static void kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm,
>> +						 struct list_head *invalid_list)
>>  {
>>  	struct kvm_mmu_page *page;
>>
>>  	page = container_of(kvm->arch.active_mmu_pages.prev,
>>  			    struct kvm_mmu_page, link);
>> -	return kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
>> +	kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
>>  }
>>
>>  static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
>> @@ -3920,15 +3920,15 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
>>  	raw_spin_lock(&kvm_lock);
>>
>>  	list_for_each_entry(kvm, &vm_list, vm_list) {
>> -		int idx, freed_pages;
>> +		int idx;
>>  		LIST_HEAD(invalid_list);
>>
>>  		idx = srcu_read_lock(&kvm->srcu);
>>  		spin_lock(&kvm->mmu_lock);
>>  		if (!kvm_freed && nr_to_scan > 0 &&
>>  		    kvm->arch.n_used_mmu_pages > 0) {
>> -			freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm,
>> -							&invalid_list);
>> +			kvm_mmu_remove_some_alloc_mmu_pages(kvm,
>> +							&invalid_list);
>>  			kvm_freed = kvm;
>>  		}
>>  		nr_to_scan--;
>
> I think mmu_shrink() is doing meaningless things.
>
> nr_to_scan should be treated as the number of objects to scan.  Here, the
> objects we are trying to free are not kvm instances but shadow pages and
> their related objects.
>
> So, decrementing it for each iteration is not at all what the caller expected.
>
> Furthermore, this code just frees from one VM and then breaks out of the
> loop, so nr_to_scan is not functioning well.

True, we should make kvm_mmu_remove_some_alloc_mmu_pages() manage
nr_to_scan.  We should also have better LRU management.  In practice the
shrinker is called rarely, so these problems don't bite.

>
> I was thinking about how to improve this shrinker, but now I also feel that
> registering mmu_shrink() might not be worth it: it may free some memory in
> the case of shadow paging, but otherwise there is little we can free by it.
>
> Is there any need for mmu_shrink()?

Without it a user can easily pin large amounts of kernel memory by filling
guest memory with page tables and shadowing them all.

--
error compiling committee.c: too many arguments to function
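
For illustration, a minimal, untested sketch of the direction suggested
above: let kvm_mmu_remove_some_alloc_mmu_pages() consume nr_to_scan itself
and report how many shadow pages it zapped, instead of zapping exactly one
page per shrink pass.  It reuses only what is visible in the quoted patch
(active_mmu_pages, n_used_mmu_pages, kvm_mmu_prepare_zap_page() and its
return value); the extra nr_to_scan parameter and the loop are assumptions,
not the change that was actually applied upstream.

/*
 * Sketch only, not merged code: scan up to nr_to_scan shadow pages, always
 * taking the oldest one from the tail of active_mmu_pages, and return how
 * many pages kvm_mmu_prepare_zap_page() put on invalid_list.  Must be
 * called with kvm->mmu_lock held, as in mmu_shrink() above.
 */
static int kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm, int nr_to_scan,
					       struct list_head *invalid_list)
{
	struct kvm_mmu_page *page;
	int freed = 0;

	while (nr_to_scan-- > 0 && kvm->arch.n_used_mmu_pages > 0) {
		page = container_of(kvm->arch.active_mmu_pages.prev,
				    struct kvm_mmu_page, link);
		freed += kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
	}
	return freed;
}

mmu_shrink() could then subtract the returned count from its own nr_to_scan
rather than decrementing it once per VM, which is closer to the
number-of-objects semantics the shrinker API expects.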