Hi Gavin,

On 2021/4/21 14:38, Gavin Shan wrote:
> Hi Keqian,
>
> On 4/16/21 12:03 AM, Keqian Zhu wrote:
>> The MMIO regions may be unmapped for many reasons and can be remapped
>> by the stage2 fault path. Mapping MMIO regions at creation time is
>> therefore only a minor optimization, and it makes these two mapping
>> paths hard to keep in sync.
>>
>> Remove the mapping code while keeping the useful sanity check.
>>
>> Signed-off-by: Keqian Zhu <zhukeqian1@xxxxxxxxxx>
>> ---
>>  arch/arm64/kvm/mmu.c | 38 +++-----------------------------------
>>  1 file changed, 3 insertions(+), 35 deletions(-)
>>
>
> After removing the logic to create stage2 mappings for VM_PFNMAP regions,
> I think the "do { } while" loop becomes unnecessary and can be dropped
> completely. It means the only remaining sanity check is whether the
> memory slot overflows the IPA space. In that case, KVM_MR_FLAGS_ONLY can
> be ignored because the memory slot's base address and length aren't
> changed when we have KVM_MR_FLAGS_ONLY.
Maybe not exactly. The loop still performs an important sanity check:
dirty logging must not be enabled for memslots that cover VM_PFNMAP VMAs.

>
> It seems the patch isn't based on the "next" branch, because find_vma()
> was replaced by find_vma_intersection() by one of my patches :)
Yep, I remember it. I will replace it in the next merge window...

Thanks,
Keqian

>
> Thanks,
> Gavin
>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 8711894db8c2..c59af5ca01b0 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1301,7 +1301,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>  {
>>          hva_t hva = mem->userspace_addr;
>>          hva_t reg_end = hva + mem->memory_size;
>> -        bool writable = !(mem->flags & KVM_MEM_READONLY);
>>          int ret = 0;
>>
>>          if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
>> @@ -1318,8 +1317,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>          mmap_read_lock(current->mm);
>>          /*
>>           * A memory region could potentially cover multiple VMAs, and any holes
>> -         * between them, so iterate over all of them to find out if we can map
>> -         * any of them right now.
>> +         * between them, so iterate over all of them.
>>           *
>>           *     +--------------------------------------------+
>>           * +---------------+----------------+   +----------------+
>> @@ -1330,50 +1328,20 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>           */
>>          do {
>>                  struct vm_area_struct *vma = find_vma(current->mm, hva);
>> -                hva_t vm_start, vm_end;
>>
>>                  if (!vma || vma->vm_start >= reg_end)
>>                          break;
>>
>> -                /*
>> -                 * Take the intersection of this VMA with the memory region
>> -                 */
>> -                vm_start = max(hva, vma->vm_start);
>> -                vm_end = min(reg_end, vma->vm_end);
>> -
>>                  if (vma->vm_flags & VM_PFNMAP) {
>> -                        gpa_t gpa = mem->guest_phys_addr +
>> -                                    (vm_start - mem->userspace_addr);
>> -                        phys_addr_t pa;
>> -
>> -                        pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
>> -                        pa += vm_start - vma->vm_start;
>> -
>>                          /* IO region dirty page logging not allowed */
>>                          if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES) {
>>                                  ret = -EINVAL;
>> -                                goto out;
>> -                        }
>> -
>> -                        ret = kvm_phys_addr_ioremap(kvm, gpa, pa,
>> -                                                    vm_end - vm_start,
>> -                                                    writable);
>> -                        if (ret)
>>                                  break;
>> +                        }
>>                  }
>> -                hva = vm_end;
>> +                hva = min(reg_end, vma->vm_end);
>>          } while (hva < reg_end);
>>
>> -        if (change == KVM_MR_FLAGS_ONLY)
>> -                goto out;
>> -
>> -        spin_lock(&kvm->mmu_lock);
>> -        if (ret)
>> -                unmap_stage2_range(&kvm->arch.mmu, mem->guest_phys_addr, mem->memory_size);
>> -        else if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
>> -                stage2_flush_memslot(kvm, memslot);
>> -        spin_unlock(&kvm->mmu_lock);
>> -out:
>>          mmap_read_unlock(current->mm);
>>          return ret;
>>  }
>>
>
> .
>
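
PS: To make the surviving sanity check concrete, here is a minimal sketch
of what the VMA walk boils down to after this patch. This is illustrative
only (derived from the diff above, with the two nested ifs folded into a
single condition) and, being kernel code, is not compilable on its own:

        /*
         * Walk every VMA intersecting [hva, reg_end) and reject the
         * request if dirty logging is asked for on a PFNMAP region.
         */
        do {
                struct vm_area_struct *vma = find_vma(current->mm, hva);

                /* No VMA left, or it starts beyond the memslot: done. */
                if (!vma || vma->vm_start >= reg_end)
                        break;

                /* IO region dirty page logging not allowed. */
                if ((vma->vm_flags & VM_PFNMAP) &&
                    (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
                        ret = -EINVAL;
                        break;
                }

                /* Advance past this VMA, clamped to the memslot end. */
                hva = min(reg_end, vma->vm_end);
        } while (hva < reg_end);

This is why the loop cannot be dropped entirely: an IPA-overflow check
alone would not catch dirty logging being enabled on a VM_PFNMAP memslot.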