Redo kvm_mmu_page_fault()'s interaction with handle_mmio_page_fault()
so that falling through to mmu.page_fault() when handle_mmio_page_fault()
returns RET_PF_INVALID is more obvious.  The current approach of setting
and checking RET_PF_INVALID outside of the MMIO flow can lead readers to
believe that RET_PF_INVALID may be used for something other than
signifying that the MMIO generation has changed.

This is a purely cosmetic change: kvm.ko's kvm_mmu_page_fault() is
binary identical on my system before and after this patch.

Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
---
 arch/x86/kvm/mmu.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f551962ac294..662bb448c7fc 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4927,21 +4927,21 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 		vcpu->arch.gpa_val = cr2;
 	}
 
-	r = RET_PF_INVALID;
 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
 		r = handle_mmio_page_fault(vcpu, cr2, direct);
 		if (r == RET_PF_EMULATE) {
 			emulation_type = 0;
 			goto emulate;
 		}
+		if (r != RET_PF_INVALID)
+			goto pf_done;
 	}
 
-	if (r == RET_PF_INVALID) {
-		r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
-					      false);
-		WARN_ON(r == RET_PF_INVALID);
-	}
+	r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
+				      false);
+	WARN_ON(r == RET_PF_INVALID);
 
+pf_done:
 	if (r == RET_PF_RETRY)
 		return 1;
 	if (r < 0)
-- 
2.16.2
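
For anyone reading along without the file open, a condensed sketch of the
resulting control flow, taken straight from the diff above (declarations
and the surrounding function body are elided, so this is illustrative
rather than the complete function):

	if (unlikely(error_code & PFERR_RSVD_MASK)) {
		r = handle_mmio_page_fault(vcpu, cr2, direct);
		if (r == RET_PF_EMULATE) {
			emulation_type = 0;
			goto emulate;
		}
		/*
		 * Any result other than RET_PF_INVALID means the MMIO
		 * fault needs only the common exit handling below.
		 */
		if (r != RET_PF_INVALID)
			goto pf_done;
		/* else: MMIO generation changed, fall through. */
	}

	/* Non-MMIO fault, or stale MMIO info: take the regular path. */
	r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
				      false);
	WARN_ON(r == RET_PF_INVALID);

pf_done:
	if (r == RET_PF_RETRY)
		return 1;

The goto keeps the MMIO path visibly self-contained, and both paths still
share the RET_PF_RETRY/error handling at pf_done without duplication.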