Re: [PATCH 2/2] kvm: x86: reduce collisions in mmu_page_hash

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 58995fd9..de55653 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1713,7 +1713,7 @@ static void kvm_mmu_free_page(struct kvm_mmu_page *sp)

 static unsigned kvm_page_table_hashfn(gfn_t gfn)
 {
-	return gfn & ((1 << KVM_MMU_HASH_SHIFT) - 1);
+	return hash_64(gfn, KVM_MMU_HASH_SHIFT);
 }

 static void mmu_page_add_parent_pte(struct kvm_vcpu *vcpu,


hash_64() might be more expensive to compute, as it involves one 64-bit
multiplication per lookup. Not sure whether that matters on this path.

Looks good to me!

--

David
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


