Right now, ZONE_DEVICE memory is always set PG_reserved. We want to
change that.

KVM has the weird use case that anything from /dev/mem can be mapped
into the guest. pfn_valid() is not a reliable check for whether the
memmap was initialized and can safely be touched. pfn_to_online_page()
makes sure that we have an initialized memmap (and are not dealing with
ZONE_DEVICE memory).

Rewrite kvm_is_reserved_pfn() so that it produces the same result once
we stop setting ZONE_DEVICE pages PG_reserved.

Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: "Radim Krčmář" <rkrcmar@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: KarimAllah Ahmed <karahmed@xxxxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
 virt/kvm/kvm_main.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e9eb666eb6e8..9d18cc67d124 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -151,9 +151,15 @@ __weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 
 bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
 {
-	if (pfn_valid(pfn))
-		return PageReserved(pfn_to_page(pfn));
+	struct page *page = pfn_to_online_page(pfn);
 
+	/*
+	 * We treat any pages that are not online (not managed by the buddy)
+	 * as reserved - this includes ZONE_DEVICE pages and pages without
+	 * a memmap (e.g., mapped via /dev/mem).
+	 */
+	if (page)
+		return PageReserved(page);
 	return true;
 }
 
-- 
2.21.0
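
P.S.: For context on why the "reserved" classification matters:
kvm_is_reserved_pfn() is used by the pfn release helpers in the same
file to decide whether a page reference may be dropped. Roughly (quoted
from memory, so treat it as a sketch rather than the exact code):

	/*
	 * Sketch of one caller in virt/kvm/kvm_main.c: if
	 * kvm_is_reserved_pfn() returns true, KVM does not drop a page
	 * reference, which is why PFNs without a properly initialized
	 * memmap must keep being reported as reserved.
	 */
	void kvm_release_pfn_clean(kvm_pfn_t pfn)
	{
		if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
			put_page(pfn_to_page(pfn));
	}

With the new check, PFNs that are not backed by an online memmap
(ZONE_DEVICE or /dev/mem mappings) keep being skipped here, just as
they were while such pages were still set PG_reserved.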