On 27/09/2018 05:49, Tianyu Lan wrote:
> This patch is to flush tlb directly in the kvm_handle_hva_range()
> when range flush is available.
> 
> Signed-off-by: Lan Tianyu <Tianyu.Lan@xxxxxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index d10d8423e8d6..877edae0401f 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1888,6 +1888,13 @@ static int kvm_handle_hva_range(struct kvm *kvm,
>  					 &iterator)
>  			ret |= handler(kvm, iterator.rmap, memslot,
>  				       iterator.gfn, iterator.level, data);
> +
> +		if (ret && kvm_available_flush_tlb_with_range()) {
> +			kvm_flush_remote_tlbs_with_address(kvm,
> +					gfn_start,
> +					gfn_end - gfn_start);
> +			ret = 0;
> +		}
>  	}
>  }
> 

Not all callers need a TLB flush; in particular, kvm_test_age_hva does
not require one.  My suggestion is to rewrite kvm_test_age_hva like
this:

index 4705a7f4169e..f72364a0ef9c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1898,12 +1898,13 @@ static int kvm_test_age_rmapp(struct kvm *kvm,
 			      struct kvm_rmap_head *rmap_head,
 			      struct kvm_memory_slot *slot, gfn_t gfn,
 			      int level, unsigned long data)
 {
+	bool *result = (bool *)data;
 	u64 *sptep;
 	struct rmap_iterator iter;
 
 	for_each_rmap_spte(rmap_head, &iter, sptep)
 		if (is_accessed_spte(*sptep))
-			return 1;
+			*result = true;
 	return 0;
 }
 
@@ -1929,7 +1930,10 @@ int kvm_age_hva(struct kvm *kvm,
 
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
 {
-	return kvm_handle_hva(kvm, hva, 0, kvm_test_age_rmapp);
+	bool result = false;
+
+	kvm_handle_hva(kvm, hva, (unsigned long) &result,
+		       kvm_test_age_rmapp);
+	return result;
 }
 
 #ifdef MMU_DEBUG

and move the flush from kvm_set_pte_rmapp to kvm_mmu_notifier_change_pte,
making kvm_set_spte_hva return an int; otherwise, it will flush twice.
For non-x86 architectures just grep for "set_spte_hva" and make the
various kvm_set_spte_hva implementations return false.

Paolo