Re: [PATCH v5 6/9] KVM: arm64: Refactor user_mem_abort() calculation of force_pte

On Monday 03 Mar 2025 at 17:10:10 (+0000), Fuad Tabba wrote:
> To simplify the code and to make the assumptions clearer,
> refactor user_mem_abort() by immediately setting force_pte to
> true if the conditions are met. Also, remove the comment about
> logging_active being guaranteed to never be true for VM_PFNMAP
> memslots, since it's not technically correct right now.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@xxxxxxxxxx>
> ---
>  arch/arm64/kvm/mmu.c | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 1f55b0c7b11d..887ffa1f5b14 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1460,7 +1460,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  			  bool fault_is_perm)
>  {
>  	int ret = 0;
> -	bool write_fault, writable, force_pte = false;
> +	bool write_fault, writable;
>  	bool exec_fault, mte_allowed;
>  	bool device = false, vfio_allow_any_uc = false;
>  	unsigned long mmu_seq;
> @@ -1472,6 +1472,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	gfn_t gfn;
>  	kvm_pfn_t pfn;
>  	bool logging_active = memslot_is_logging(memslot);
> +	bool force_pte = logging_active || is_protected_kvm_enabled();
>  	long vma_pagesize, fault_granule;
>  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
>  	struct kvm_pgtable *pgt;
> @@ -1521,16 +1522,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
>  
> -	/*
> -	 * logging_active is guaranteed to never be true for VM_PFNMAP
> -	 * memslots.
> -	 */

Indeed, I tried to add the following snippet on top of upstream:

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1f55b0c7b11d..b5c3a6b9957f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1525,6 +1525,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
         * logging_active is guaranteed to never be true for VM_PFNMAP
         * memslots.
         */
+       WARN_ON_ONCE(logging_active && (vma->vm_flags & VM_PFNMAP));
        if (logging_active || is_protected_kvm_enabled()) {
                force_pte = true;
                vma_shift = PAGE_SHIFT;

And I could easily get that thing to trigger: the trick is to back a
memslot with standard anon memory, enable dirty logging, and then mmap()
a VM_PFNMAP region with MAP_FIXED on top of that same HVA range, and KVM
will happily proceed. Note that this has nothing to do with your series;
it's just an existing upstream bug.
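
The sequence looks roughly like this (a minimal sketch, error handling
and vcpu setup elided; /dev/mem just stands in for any driver whose
mmap() goes through remap_pfn_range() and therefore yields a VM_PFNMAP
vma):

#include <fcntl.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* 1. Back the memslot with standard anon memory. */
	void *hva = mmap(NULL, 0x200000, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* 2. Register the slot with dirty logging enabled. The VM_PFNMAP
	 * check in kvm_arch_prepare_memory_region() passes, since the
	 * vma backing the HVA range is plain anon memory at this point.
	 */
	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.flags = KVM_MEM_LOG_DIRTY_PAGES,
		.guest_phys_addr = 0,
		.memory_size = 0x200000,
		.userspace_addr = (unsigned long)hva,
	};
	ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

	/* 3. Replace the backing with a VM_PFNMAP mapping at the same
	 * HVA. The memslot (and its logging flag) is untouched, so a
	 * later stage-2 fault observes logging_active and VM_PFNMAP
	 * both true at the same time.
	 */
	int dev = open("/dev/mem", O_RDWR);
	mmap(hva, 0x200000, PROT_READ | PROT_WRITE,
	     MAP_SHARED | MAP_FIXED, dev, 0);

	/* A vcpu faulting on this slot now hits the WARN above. */
	return 0;
}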

Sadly that means the vma checks we do in kvm_arch_prepare_memory_region()
are bogus. Memslots are associated with an HVA range, not with the
underlying VMAs, which are not guaranteed to remain stable. This bug
applies to both the VM_PFNMAP checks and the MTE checks, I think.
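
To spell out the race (paraphrasing the vma walk in
kvm_arch_prepare_memory_region(), not quoting it verbatim; see
arch/arm64/kvm/mmu.c for the real thing), the check runs exactly once,
against whatever vma happens to back the range at registration time:

	if (vma->vm_flags & VM_PFNMAP) {
		/* IO region dirty page logging not allowed */
		if (new->flags & KVM_MEM_LOG_DIRTY_PAGES)
			return -EINVAL;
	}

	/* The vma is only borrowed here: nothing ties it to the memslot,
	 * so userspace is free to munmap()/mmap() the range afterwards
	 * and invalidate whatever conclusion this check just reached.
	 */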

I can't immediately think of a good way to make the checks more robust,
but I'll have a think. If anybody has an idea ... :-)

Thanks,
Quentin

> -	if (logging_active || is_protected_kvm_enabled()) {
> -		force_pte = true;
> +	if (force_pte)
>  		vma_shift = PAGE_SHIFT;
> -	} else {
> +	else
>  		vma_shift = get_vma_page_shift(vma, hva);
> -	}
>  
>  	switch (vma_shift) {
>  #ifndef __PAGETABLE_PMD_FOLDED
> -- 
> 2.48.1.711.g2feabab25a-goog
> 



