Re: [RFC PATCH 2/3] KVM: arm64: Add fast path to handle permission relaxation during dirty logging

On Mon, 10 Jan 2022 21:04:40 +0000,
Jing Zhang <jingzhangos@xxxxxxxxxx> wrote:
> 
> To reduce MMU lock contention during dirty logging, all permission
> relaxation operations are now performed under the read lock.
> 
> Signed-off-by: Jing Zhang <jingzhangos@xxxxxxxxxx>
> ---
>  arch/arm64/kvm/mmu.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index cafd5813c949..dd1f43fba4b0 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1063,6 +1063,54 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  	return 0;
>  }
>  
> +static bool fast_mark_writable(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> +		struct kvm_memory_slot *memslot, unsigned long fault_status)
> +{
> +	int ret;
> +	bool writable;
> +	bool write_fault = kvm_is_write_fault(vcpu);
> +	gfn_t gfn = fault_ipa >> PAGE_SHIFT;
> +	kvm_pfn_t pfn;
> +	struct kvm *kvm = vcpu->kvm;
> +	bool logging_active = memslot_is_logging(memslot);
> +	unsigned long fault_level = kvm_vcpu_trap_get_fault_level(vcpu);
> +	unsigned long fault_granule;
> +
> +	fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level);
> +
> +	/* Make sure the fault can be handled in the fast path.
> +	 * Only handle write permission faults on non-hugepages during
> +	 * the dirty logging period.
> +	 */

Not the correct comment format.
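The preferred multi-line comment style puts the opening /* on a line
of its own, i.e.:

	/*
	 * Make sure the fault can be handled in the fast path.
	 * Only handle write permission faults on non-hugepages during
	 * the dirty logging period.
	 */

See Documentation/process/coding-style.rst.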

> +	if (fault_status != FSC_PERM || fault_granule != PAGE_SIZE
> +			|| !logging_active || !write_fault)
> +		return false;

This is all reinventing the logic that already exists in
user_mem_abort(). I'm sympathetic to the effort not to bloat it even
more, but code duplication doesn't help either.

> +
> +
> +	/* Pin the pfn to make sure it cannot be freed and reused for
> +	 * another gfn.
> +	 */
> +	pfn = __gfn_to_pfn_memslot(memslot, gfn, true, NULL,
> +				   write_fault, &writable, NULL);
> +	if (is_error_pfn(pfn) || !writable)
> +		return false;

What happens if we hit a non-writable mapping? Don't we leak a page
reference?
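If the mapping turns out not to be writable, the reference that
__gfn_to_pfn_memslot() took on the page has to be dropped before
bailing out. Something like this (untested sketch):

	if (is_error_pfn(pfn))
		return false;

	if (!writable) {
		kvm_release_pfn_clean(pfn);
		return false;
	}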

> +
> +	read_lock(&kvm->mmu_lock);
> +	ret = kvm_pgtable_stage2_relax_perms(
> +			vcpu->arch.hw_mmu->pgt, fault_ipa, PAGE_HYP);

PAGE_HYP? Err... no. KVM_PGTABLE_PROT_RW, more likely. Yes, they
expand to the same thing, but you are not dealing with nVHE EL2 S1
page tables here.
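With the right constant spelled out, the call would read:

	ret = kvm_pgtable_stage2_relax_perms(vcpu->arch.hw_mmu->pgt,
					     fault_ipa, KVM_PGTABLE_PROT_RW);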

> +
> +	if (!ret) {
> +		kvm_set_pfn_dirty(pfn);
> +		mark_page_dirty_in_slot(kvm, memslot, gfn);
> +	}
> +	read_unlock(&kvm->mmu_lock);
> +
> +	kvm_set_pfn_accessed(pfn);
> +	kvm_release_pfn_clean(pfn);
> +
> +	return true;
> +}
> +
>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  			  struct kvm_memory_slot *memslot, unsigned long hva,
>  			  unsigned long fault_status)
> @@ -1085,6 +1133,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
>  	struct kvm_pgtable *pgt;
>  
> +	if (fast_mark_writable(vcpu, fault_ipa, memslot, fault_status))
> +		return 0;
>  	fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level);
>  	write_fault = kvm_is_write_fault(vcpu);
>  	exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);

You are bypassing all sorts of checks that I want to keep. Please
integrate this into user_mem_abort() instead of this side hack.
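To give you an idea of the shape I have in mind (completely untested,
and use_read_lock is a made-up local flag): reuse the checks that
user_mem_abort() already performs, and only use them to pick the lock
flavour:

	use_read_lock = (logging_active && write_fault &&
			 fault_status == FSC_PERM &&
			 fault_granule == PAGE_SIZE);
	...
	if (use_read_lock)
		read_lock(&kvm->mmu_lock);
	else
		write_lock(&kvm->mmu_lock);

That way, all the existing sanity checks still run before anything
touches the stage-2 page tables.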

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


