Replace the is_error_noslot_pfn() check in transparent_hugepage_adjust()
with an is_noslot_pfn() check.  thp_adjust() cannot be reached with an
error pfn as it is always called after handle_abnormal_pfn(), which
aborts the page fault handler if an error pfn is encountered.

Don't bother future proofing thp_adjust() with a WARN on is_error_pfn(),
as calling thp_adjust() before handle_abnormal_pfn() is impossible for
all intents and purposes, e.g. thp_adjust() relies on being called after
mmu_notifier_retry() and while holding mmu_lock, thus moving it would
essentially require a complete rewrite of KVM's page fault handlers.

No functional change intended.

Reported-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
---
 arch/x86/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index bf82b1f2e834..c35c6fb2635a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3305,7 +3305,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
	 * PT_PAGE_TABLE_LEVEL and there would be no adjustment done
	 * here.
	 */
-	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
+	if (!is_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
	    !kvm_is_zone_device_pfn(pfn) && level == PT_PAGE_TABLE_LEVEL &&
	    PageTransCompoundMap(pfn_to_page(pfn)) &&
	    !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
-- 
2.24.0
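
For reference, the relationship between the pfn predicates comes from
include/linux/kvm_host.h; the snippet below is a paraphrase of those
helpers (not part of this patch, and the exact definitions may differ by
kernel version), showing why is_noslot_pfn() is the narrower check once
handle_abnormal_pfn() has already filtered out error pfns:

#include <linux/types.h>

typedef u64 kvm_pfn_t;			/* as in include/linux/kvm_types.h */

/* Bits 52..62 flag an error pfn; bit 63 flags "no memslot". */
#define KVM_PFN_ERR_MASK		(0x7ffULL << 52)
#define KVM_PFN_ERR_NOSLOT_MASK		(0xfffULL << 52)
#define KVM_PFN_NOSLOT			(0x1ULL << 63)

/* Error pfn: the gfn is in a memslot but failed to translate to a pfn. */
static inline bool is_error_pfn(kvm_pfn_t pfn)
{
	return !!(pfn & KVM_PFN_ERR_MASK);
}

/* Error *or* noslot pfn: the gfn could not be translated at all. */
static inline bool is_error_noslot_pfn(kvm_pfn_t pfn)
{
	return !!(pfn & KVM_PFN_ERR_NOSLOT_MASK);
}

/* Noslot pfn only: the gfn is not covered by any memslot. */
static inline bool is_noslot_pfn(kvm_pfn_t pfn)
{
	return pfn == KVM_PFN_NOSLOT;
}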