Re: [PATCH 2/9] KVM: MMU: introduce slot_handle_level() and its helper

On 05/07/2015 08:04 PM, Paolo Bonzini wrote:


> On 30/04/2015 12:24, guangrong.xiao@xxxxxxxxxxxxxxx wrote:
>> From: Xiao Guangrong <guangrong.xiao@xxxxxxxxxxxxxxx>
>>
>> There are several places that walk all rmaps for the memslot, so
>> introduce common functions to clean up the code.
>>
>> Signed-off-by: Xiao Guangrong <guangrong.xiao@xxxxxxxxxxxxxxx>
>> ---
>>  arch/x86/kvm/mmu.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 63 insertions(+)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index ea3e3e4..75a3459 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -4410,6 +4410,69 @@ void kvm_mmu_setup(struct kvm_vcpu *vcpu)
>>  	init_kvm_mmu(vcpu);
>>  }
>>
>> +/* The return value indicates if tlb flush on all vcpus is needed. */
>> +typedef bool (*slot_level_handler) (struct kvm *kvm, unsigned long *rmap);
>> +
>> +/* The caller should hold mmu-lock before calling this function. */
>> +static bool
>> +slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
>> +		  slot_level_handler fn, int min_level, int max_level,
>> +		  bool lock_flush_tlb)
> Why not introduce for_each_slot_rmap first, instead of introducing one
> implementation first and then switching to another?  It's a small
> change to reorder the patches like that.

Yes, it's better, will do it in v2.

> I think we should have three iterator macros:
>
> #define for_each_rmap_spte(rmap, iter, spte)
>
> #define for_each_slot_rmap(slot, min_level, max_level, iter, rmapp)
>
> #define for_each_slot_rmap_range(slot, min_level, max_level, \
> 				 start_gfn, end_gfn, iter, rmapp)
>
> where the last two take care of initializing the walker/iterator in the
> first part of the "for".

Okay, I agree.
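
Something like this, perhaps (an untested sketch; slot_rmap_walk_init/
slot_rmap_walk_okay/slot_rmap_walk_next would be new helpers, and the
range macro publishes the current rmap through rmapp):

struct slot_rmap_iterator {
	/* input fields. */
	struct kvm_memory_slot *slot;
	gfn_t start_gfn;
	gfn_t end_gfn;
	int start_level;
	int end_level;

	/* output fields. */
	int level;
	gfn_t gfn;
	unsigned long *rmap;
};

#define for_each_slot_rmap_range(slot, min_level, max_level,		\
				 start_gfn, end_gfn, iter, rmapp)	\
	for (slot_rmap_walk_init(iter, slot, min_level, max_level,	\
				 start_gfn, end_gfn);			\
	     slot_rmap_walk_okay(iter) && ((rmapp) = (iter)->rmap);	\
	     slot_rmap_walk_next(iter))

#define for_each_slot_rmap(slot, min_level, max_level, iter, rmapp)	\
	for_each_slot_rmap_range(slot, min_level, max_level,		\
				 (slot)->base_gfn,			\
				 (slot)->base_gfn + (slot)->npages - 1,	\
				 iter, rmapp)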


> This way, this function would be introduced immediately as this very
> readable code:
>
> 	struct slot_rmap_iterator iter;
> 	unsigned long *rmapp;
> 	bool flush = false;
>
> 	for_each_slot_rmap(memslot, min_level, max_level, &iter, rmapp) {
> 		if (*rmapp)
> 			flush |= fn(kvm, rmapp);
>
> 		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
> 			if (flush && lock_flush_tlb) {
> 				kvm_flush_remote_tlbs(kvm);
> 				flush = false;
> 			}
> 			cond_resched_lock(&kvm->mmu_lock);
> 		}
> 	}
>
> 	/*
> 	 * What about adding this here: then callers that pass
> 	 * lock_flush_tlb == true need not care about the return
> 	 * value!
> 	 */
> 	if (flush && lock_flush_tlb) {
> 		kvm_flush_remote_tlbs(kvm);
> 		flush = false;
> 	}
>
> 	return flush;

Good idea.
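
With the flush folded in, a write-protect caller could then be as simple
as this (hypothetical example: slot_rmap_write_protect is the per-rmap
handler, and PT_MAX_HUGE_PAGE_LEVEL names the largest mapping level):

void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
				      struct kvm_memory_slot *memslot)
{
	spin_lock(&kvm->mmu_lock);
	/* lock_flush_tlb == true: the final flush happens inside. */
	slot_handle_level(kvm, memslot, slot_rmap_write_protect,
			  PT_PAGE_TABLE_LEVEL, PT_MAX_HUGE_PAGE_LEVEL, true);
	spin_unlock(&kvm->mmu_lock);
}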


> In addition, some of these functions need to be marked always_inline I
> think; either slot_handle_level/slot_handle_*_level, or the
> iterators/walkers.  Can you collect kvm.ko size for both cases?

After applying patches 1-5:

no inline:
$ size arch/x86/kvm/kvm.ko
   text    data     bss     dec     hex filename
 366406   51535     473  418414   6626e arch/x86/kvm/kvm.ko

inline:
$ size arch/x86/kvm/kvm.ko
   text    data     bss     dec     hex filename
 366638   51535     473  418646   66356 arch/x86/kvm/kvm.ko
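
(Here "inline" means marking the common walker __always_inline, e.g. a
sketch like:

static __always_inline bool
slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
		  slot_level_handler fn, int min_level, int max_level,
		  bool lock_flush_tlb)
{
	/* body unchanged: walk the rmaps, call fn, flush/resched. */
	...
}

so that each slot_handle_*_level() wrapper gets its own specialized copy.)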

The text size differs by only 232 bytes. Since these are static
functions, I prefer letting GCC decide what to inline automatically
rather than marking them always_inline.
