> On 12 Dec 2019, at 19:34, Sean Christopherson <sean.j.christopherson@xxxxxxxxx> wrote:
> 
> On Wed, Dec 11, 2019 at 04:32:07PM -0500, Barret Rhoden wrote:
>> This change allows KVM to map DAX-backed files made of huge pages with
>> huge mappings in the EPT/TDP.
>> 
>> DAX pages are not PageTransCompound. The existing check is trying to
>> determine if the mapping for the pfn is a huge mapping or not. For
>> non-DAX maps, e.g. hugetlbfs, that means checking PageTransCompound.
>> For DAX, we can check the page table itself.
>> 
>> Note that KVM already faulted in the page (or huge page) in the host's
>> page table, and we hold the KVM mmu spinlock. We grabbed that lock in
>> kvm_mmu_notifier_invalidate_range_end, before checking the mmu seq.
>> 
>> Signed-off-by: Barret Rhoden <brho@xxxxxxxxxx>
>> ---
>> arch/x86/kvm/mmu/mmu.c | 36 ++++++++++++++++++++++++++++++++----
>> 1 file changed, 32 insertions(+), 4 deletions(-)
>> 
>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>> index 6f92b40d798c..cd07bc4e595f 100644
>> --- a/arch/x86/kvm/mmu/mmu.c
>> +++ b/arch/x86/kvm/mmu/mmu.c
>> @@ -3384,6 +3384,35 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
>> 	return -EFAULT;
>> }
>> 
>> +static bool pfn_is_huge_mapped(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn)
>> +{
>> +	struct page *page = pfn_to_page(pfn);
>> +	unsigned long hva;
>> +
>> +	if (!is_zone_device_page(page))
>> +		return PageTransCompoundMap(page);
>> +
>> +	/*
>> +	 * DAX pages do not use compound pages. The page should have already
>> +	 * been mapped into the host-side page table during try_async_pf(), so
>> +	 * we can check the page tables directly.
>> +	 */
>> +	hva = gfn_to_hva(kvm, gfn);
>> +	if (kvm_is_error_hva(hva))
>> +		return false;
>> +
>> +	/*
>> +	 * Our caller grabbed the KVM mmu_lock with a successful
>> +	 * mmu_notifier_retry, so we're safe to walk the page table.
>> +	 */
>> +	switch (dev_pagemap_mapping_shift(hva, current->mm)) {
>> +	case PMD_SHIFT:
>> +	case PUD_SHIFT:
> 
> I assume this means DAX can have 1GB pages? I ask because KVM's THP logic
> has historically relied on THP only supporting 2MB. I cleaned this up in
> a recent series[*], which is in kvm/queue, but I obviously didn't actually
> test whether or not KVM would correctly handle 1GB non-hugetlbfs pages.

KVM doesn’t handle 1GB mappings correctly for all types of non-hugetlbfs
pages. One example we have noticed internally, but haven’t yet submitted
an upstream patch for, is pages without a “struct page”: in that case,
hva_to_pfn() will notice that vma->vm_flags has VM_PFNMAP set and call
hva_to_pfn_remapped() -> follow_pfn(). However, follow_pfn() currently
just calls follow_pte(), which uses __follow_pte_pmd(), and that helper
doesn’t handle a huge PUD entry.
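
To illustrate, here is a condensed sketch of that walk, modeled on
__follow_pte_pmd() in mm/memory.c around v5.5. Locking, the
mmu_notifier range plumbing and some error paths are elided, so treat
this as illustrative rather than the verbatim upstream code:

/* Condensed, illustrative version of __follow_pte_pmd(). */
static int follow_pte_pmd_sketch(struct mm_struct *mm, unsigned long address,
				 pte_t **ptepp, pmd_t **pmdpp,
				 spinlock_t **ptlp)
{
	pgd_t *pgd = pgd_offset(mm, address);
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
		return -EINVAL;
	p4d = p4d_offset(pgd, address);
	if (p4d_none(*p4d) || unlikely(p4d_bad(*p4d)))
		return -EINVAL;
	pud = pud_offset(p4d, address);
	/*
	 * There is no pud_huge()/pud_devmap() case and no pudpp
	 * out-parameter: a 1GB leaf PUD is not a pointer to a PMD table,
	 * so (on x86, because of _PAGE_PSE) it trips pud_bad() and the
	 * walk errors out instead of returning the mapping.
	 */
	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
		return -EINVAL;
	pmd = pmd_offset(pud, address);
	if (pmd_huge(*pmd)) {
		/* Huge PMDs, by contrast, are handed back via @pmdpp. */
		if (!pmdpp)
			return -EINVAL;
		*pmdpp = pmd;
		return 0;
	}
	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
		return -EINVAL;
	*ptepp = pte_offset_map_lock(mm, pmd, address, ptlp);
	return 0;
}

So, as far as we can tell, follow_pfn() on a 1GB VM_PFNMAP mapping fails
outright rather than returning a pfn, let alone telling the caller that
the mapping is huge.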
> 
> The easiest thing is probably to rebase on kvm/queue. You'll need to do
> that anyways, and I suspect doing so will help shake out any hiccups.
> 
> [*] https://lkml.kernel.org/r/20191206235729.29263-1-sean.j.christopherson@intel.com
> 
>> +		return true;
>> +	}
>> +	return false;
>> +}
>> +
>> static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
>> 					gfn_t gfn, kvm_pfn_t *pfnp,
>> 					int *levelp)
>> @@ -3398,8 +3427,8 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
>> 	 * here.
>> 	 */
>> 	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
>> -	    !kvm_is_zone_device_pfn(pfn) && level == PT_PAGE_TABLE_LEVEL &&
>> -	    PageTransCompoundMap(pfn_to_page(pfn)) &&
>> +	    level == PT_PAGE_TABLE_LEVEL &&
>> +	    pfn_is_huge_mapped(vcpu->kvm, gfn, pfn) &&
>> 	    !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
>> 		unsigned long mask;
>> 		/*
>> @@ -6015,8 +6044,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>> 	 * mapping if the indirect sp has level = 1.
>> 	 */
>> 	if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
>> -	    !kvm_is_zone_device_pfn(pfn) &&
>> -	    PageTransCompoundMap(pfn_to_page(pfn))) {
>> +	    pfn_is_huge_mapped(kvm, sp->gfn, pfn)) {
>> 		pte_list_remove(rmap_head, sptep);
>> 
>> 		if (kvm_available_flush_tlb_with_range())
>> --
>> 2.24.0.525.g8f36a354ae-goog
>> 