Clean up KVM's struct page / pfn helpers to reduce the number of
pfn_to_page() and page_to_pfn() conversions.  E.g. kvm_release_pfn_dirty()
makes 6 (if I counted right) calls to pfn_to_page() when releasing a dirty
pfn that is backed by a vanilla struct page.  That is easily trimmed down
to a single call.

And perhaps more importantly, rename and refactor kvm_is_reserved_pfn() to
better reflect what it actually queries, which at this point is effectively
whether or not the pfn is backed by a refcounted page.

Sean Christopherson (10):
  KVM: Do not zero initialize 'pfn' in hva_to_pfn()
  KVM: Drop bogus "pfn != 0" guard from kvm_release_pfn()
  KVM: Don't set Accessed/Dirty bits for ZERO_PAGE
  KVM: Avoid pfn_to_page() and vice versa when releasing pages
  KVM: nVMX: Use kvm_vcpu_map() to get/pin vmcs12's APIC-access page
  KVM: Don't WARN if kvm_pfn_to_page() encounters a "reserved" pfn
  KVM: Remove kvm_vcpu_gfn_to_page() and kvm_vcpu_gpa_to_page()
  KVM: Take a 'struct page', not a pfn in kvm_is_zone_device_page()
  KVM: Rename/refactor kvm_is_reserved_pfn() to
    kvm_pfn_to_refcounted_page()
  KVM: x86/mmu: Shove refcounted page dependency into
    host_pfn_mapping_level()

 arch/x86/kvm/mmu/mmu.c     |  26 +++++--
 arch/x86/kvm/mmu/tdp_mmu.c |   3 +-
 arch/x86/kvm/vmx/nested.c  |  39 ++++------
 arch/x86/kvm/vmx/vmx.h     |   2 +-
 include/linux/kvm_host.h   |  12 +---
 virt/kvm/kvm_main.c        | 140 +++++++++++++++++++++++++------------
 6 files changed, 131 insertions(+), 91 deletions(-)


base-commit: 2a39d8b39bffdaf1a4223d0d22f07baee154c8f3
-- 
2.36.0.464.gb9c8b46e94-goog