When we are checking whether stage2 can map PAGE_SIZE, we do not have to perform the boundary checks, as both the host VMA and the guest memslots are page aligned. Bail out early in that case.

Cc: Christoffer Dall <christoffer.dall@xxxxxxx>
Cc: Marc Zyngier <marc.zyngier@xxxxxxx>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@xxxxxxx>
---
 virt/kvm/arm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index a39dcfd..6d73322 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	hva_t uaddr_start, uaddr_end;
 	size_t size;
 
+	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
+	if (map_size == PAGE_SIZE)
+		return true;
+
 	size = memslot->npages * PAGE_SIZE;
 
 	gpa_start = memslot->base_gfn << PAGE_SHIFT;
-- 
2.7.4
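
For reference, a minimal userspace sketch of the idea behind the early return (not the kernel code; range_fits_map_size() is a hypothetical helper modelling the kind of alignment test that fault_supports_stage2_huge_mapping() performs for larger block sizes, and a 4K PAGE_SIZE is assumed):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SIZE	4096UL	/* assumed 4K pages for the example */

	/*
	 * Hypothetical model of the boundary check: for block sizes larger
	 * than a page, the userspace (uaddr) and guest (gpa) ranges must
	 * start at the same offset within a map_size block and cover a
	 * whole number of such blocks.  For map_size == PAGE_SIZE the test
	 * is trivially true, since both bases are already page aligned.
	 */
	static bool range_fits_map_size(uint64_t uaddr_start, uint64_t gpa_start,
					uint64_t size, uint64_t map_size)
	{
		if (map_size == PAGE_SIZE)
			return true;

		return ((uaddr_start ^ gpa_start) & (map_size - 1)) == 0 &&
		       (size & (map_size - 1)) == 0;
	}

	int main(void)
	{
		/* A page-aligned VMA/memslot pair: always fine for PAGE_SIZE. */
		printf("%d\n", range_fits_map_size(0x400000, 0x80000000,
						   16 * PAGE_SIZE, PAGE_SIZE));
		/* The same pair against a 2M block: fails, the slot is not a
		 * multiple of 2M even though both bases are 2M aligned. */
		printf("%d\n", range_fits_map_size(0x400000, 0x80000000,
						   16 * PAGE_SIZE, 2 * 1024 * 1024));
		return 0;
	}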