On Tue, Nov 5, 2019 at 3:13 PM Sean Christopherson <sean.j.christopherson@xxxxxxxxx> wrote:
>
> On Tue, Nov 05, 2019 at 03:02:40PM -0800, Dan Williams wrote:
> > On Tue, Nov 5, 2019 at 12:31 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
> > > > The scarier code (for me) is transparent_hugepage_adjust() and
> > > > kvm_mmu_zap_collapsible_spte(), as I don't at all understand the
> > > > interaction between THP and _PAGE_DEVMAP.
> > >
> > > The x86 KVM MMU code is some of the ugliest code I know (sorry, but it
> > > had to be said :/ ). Luckily, this should be independent of the
> > > PG_reserved thingy AFAIKs.
> >
> > Both transparent_hugepage_adjust() and kvm_mmu_zap_collapsible_spte()
> > are honoring kvm_is_reserved_pfn(), so again I'm missing where the
> > page count gets mismanaged and leads to the reported hang.
>
> When mapping pages into the guest, KVM gets the page via gup(), which
> increments the page count for ZONE_DEVICE pages. But KVM puts the page
> using kvm_release_pfn_clean(), which skips put_page() if PageReserved()
> and so never puts its reference to ZONE_DEVICE pages.

Oh, yeah, that's busted.

> My transparent_hugepage_adjust() and kvm_mmu_zap_collapsible_spte()
> comments were for a post-patch/series scenario wherein PageReserved() is
> no longer true for ZONE_DEVICE pages.

Ah, ok, for that David is preserving kvm_is_reserved_pfn() returning
true for ZONE_DEVICE because pfn_to_online_page() will fail for
ZONE_DEVICE.