Re: [RFC PATCH 3/6] KVM: x86/mmu: Pass the memslot around via struct kvm_page_fault

On Fri, Aug 13, 2021, David Matlack wrote:
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 3352312ab1c9..fb2c95e8df00 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2890,7 +2890,7 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
>  
>  void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
> -	struct kvm_memory_slot *slot;
> +	struct kvm_memory_slot *slot = fault->slot;
>  	kvm_pfn_t mask;
>  
>  	fault->huge_page_disallowed = fault->exec && fault->nx_huge_page_workaround_enabled;
> @@ -2901,8 +2901,7 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
>  		return;
>  
> -	slot = gfn_to_memslot_dirty_bitmap(vcpu, fault->gfn, true);
> -	if (!slot)
> +	if (kvm_slot_dirty_track_enabled(slot))

This is unnecessarily obfuscated.  It relies on the is_error_noslot_pfn() check
to ensure fault->slot is valid, but the only reason that helper is used is that
it was the most efficient option back when the slot wasn't available.  IMO,
this would be better:

	if (!slot || kvm_slot_dirty_track_enabled(slot))
		return;

	if (kvm_is_reserved_pfn(fault->pfn))
		return;
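
Checking !slot explicitly also documents the dependency on a valid memslot,
instead of burying it in the error/noslot pfn check.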

On a related topic, a good follow-up to this series would be to pass @fault into
the prefetch helpers, and modify the prefetch logic to re-use fault->slot and
refuse to prefetch across memslot boundaries.  That would eliminate all users of
gfn_to_memslot_dirty_bitmap() and allow us to drop that abomination.
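
Completely untested, and the plumbing to get @fault into direct_pte_prefetch()
is waved away, but the direct MMU side could end up looking something like the
below.  The memslot-boundary and dirty-logging checks are illustrative, not
what the final patch necessarily needs to look like:

	static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
					    struct kvm_page_fault *fault,
					    struct kvm_mmu_page *sp,
					    u64 *start, u64 *end)
	{
		struct kvm_memory_slot *slot = fault->slot;
		unsigned int access = sp->role.access;
		struct page *pages[PTE_PREFETCH_NUM];
		int i, ret;
		gfn_t gfn;

		gfn = kvm_mmu_page_get_gfn(sp, start - sp->spt);

		/*
		 * Re-use the memslot that was resolved for the original fault,
		 * and refuse to prefetch across the memslot boundary.
		 */
		if (!slot || gfn < slot->base_gfn ||
		    gfn + (end - start) > slot->base_gfn + slot->npages)
			return -1;

		/* Don't prefetch writable SPTEs if dirty logging is enabled. */
		if ((access & ACC_WRITE_MASK) && kvm_slot_dirty_track_enabled(slot))
			return -1;

		ret = gfn_to_page_many_atomic(slot, gfn, pages, end - start);
		if (ret <= 0)
			return -1;

		for (i = 0; i < ret; i++, gfn++, start++) {
			mmu_set_spte(vcpu, start, access, false, sp->role.level,
				     gfn, page_to_pfn(pages[i]), true, true);
			put_page(pages[i]);
		}

		return 0;
	}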


