mmu_try_to_unsync_pages() performs the !can_unsync check before
attempting to unsync any shadow pages. The check is currently done
inside the loop, which is redundant: the can_unsync parameter is never
updated inside the loop, so if the check passes on the first iteration
it passes on every iteration. Move the check outside of the loop. The
same applies to the prefetch check.

Signed-off-by: Vihas Mak <makvihas@xxxxxxxxx>
Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
Cc: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
Cc: Wanpeng Li <wanpengli@xxxxxxxxxxx>
Cc: Jim Mattson <jmattson@xxxxxxxxxx>
Cc: Joerg Roedel <joro@xxxxxxxxxx>
---
 arch/x86/kvm/mmu/mmu.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1d275e9d7..53f4b8b07 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2586,6 +2586,11 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 	if (kvm_slot_page_track_is_active(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE))
 		return -EPERM;
 
+	if (!can_unsync)
+		return -EPERM;
+
+	if (prefetch)
+		return -EEXIST;
 	/*
 	 * The page is not write-tracked, mark existing shadow pages unsync
 	 * unless KVM is synchronizing an unsync SP (can_unsync = false).  In
@@ -2593,15 +2598,9 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 	 * allowing shadow pages to become unsync (writable by the guest).
 	 */
 	for_each_gfn_indirect_valid_sp(kvm, sp, gfn) {
-		if (!can_unsync)
-			return -EPERM;
-
 		if (sp->unsync)
 			continue;
 
-		if (prefetch)
-			return -EEXIST;
-
 		/*
 		 * TDP MMU page faults require an additional spinlock as they
 		 * run with mmu_lock held for read, not write, and the unsync
-- 
2.30.2
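
For readers less familiar with the pattern, below is a minimal,
self-contained userspace sketch of the same transformation: conditions
that are loop-invariant are hoisted out of the loop so they are
evaluated once rather than on every iteration. The struct, function,
and variable names are hypothetical stand-ins that only mirror the
control flow of mmu_try_to_unsync_pages(); this is illustrative code,
not kernel code.

/*
 * Illustrative sketch of hoisting loop-invariant checks. Neither
 * can_unsync nor prefetch changes inside the loop, so checking them
 * once up front is equivalent to checking them on each iteration
 * that executes.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct entry {
	bool unsync;
};

static int try_to_unsync(struct entry *entries, int nr,
			 bool can_unsync, bool prefetch)
{
	/* Loop-invariant checks, hoisted out of the loop. */
	if (!can_unsync)
		return -EPERM;

	if (prefetch)
		return -EEXIST;

	for (int i = 0; i < nr; i++) {
		/* Skip entries that are already unsynced. */
		if (entries[i].unsync)
			continue;

		entries[i].unsync = true;
	}

	return 0;
}

int main(void)
{
	struct entry entries[3] = { { false }, { true }, { false } };

	/* Succeeds: unsync is allowed and this is not a prefetch. */
	printf("ret = %d\n", try_to_unsync(entries, 3, true, false));
	/* Fails with -EPERM before touching any entry. */
	printf("ret = %d\n", try_to_unsync(entries, 3, false, false));
	return 0;
}

Before the hoist, the equivalents of the two early returns sat inside
the loop body; after it, they run exactly once regardless of how many
entries the loop visits.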