> On 12 Dec 2019, at 20:22, Barret Rhoden <brho@xxxxxxxxxx> wrote:
> 
> This change allows KVM to map DAX-backed files made of huge pages with
> huge mappings in the EPT/TDP.

This change isn’t only relevant for TDP. It also affects the case where KVM uses shadow paging. See how FNAME(page_fault)() calls transparent_hugepage_adjust().

> 
> DAX pages are not PageTransCompound. The existing check is trying to
> determine if the mapping for the pfn is a huge mapping or not.

I would rephrase this as: “The existing check is trying to determine if the pfn is mapped as part of a transparent huge-page”.

> For
> non-DAX maps, e.g. hugetlbfs, that means checking PageTransCompound.

This is not related to hugetlbfs but rather to THP.

> For DAX, we can check the page table itself.
> 
> Note that KVM already faulted in the page (or huge page) in the host's
> page table, and we hold the KVM mmu spinlock. We grabbed that lock in
> kvm_mmu_notifier_invalidate_range_end, before checking the mmu seq.
> 
> Signed-off-by: Barret Rhoden <brho@xxxxxxxxxx>

I don’t think transparent_hugepage_adjust() is the right place for this change, since that function is meant to handle PFNs that are mapped as part of a transparent huge-page. For example, this would prevent mapping a DAX-backed file page with a 1GB mapping, because transparent_hugepage_adjust() only handles the (level == PT_PAGE_TABLE_LEVEL) case.

Since you are parsing the page tables to discover the page size the PFN is mapped with, I think you should instead modify kvm_host_page_size() to parse the page tables, rather than rely on vma_kernel_pagesize() (which relies on vma->vm_ops->pagesize()), in the is_zone_device_page() case.

The main complication of doing this, though, is that at this point you don’t yet have the PFN, which is only retrieved by try_async_pf(). So maybe you should consider modifying the order of calls in tdp_page_fault() & FNAME(page_fault)().
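A rough, untested sketch of what I have in mind follows. The helper name host_pfn_mapping_level() is hypothetical, and it assumes the dev_pagemap_mapping_shift(hva, mm) variant used in the patch below, plus the reordering so the PFN is already available at this point:

/*
 * Sketch only: derive the mapping level from the host page tables for
 * zone-device (DAX) PFNs instead of relying on vma_kernel_pagesize().
 * Assumes the PFN was already resolved by try_async_pf() and that the
 * caller holds mmu_lock after a successful mmu_notifier_retry(), as in
 * the patch below.
 */
static int host_pfn_mapping_level(struct kvm_vcpu *vcpu, gfn_t gfn,
				  kvm_pfn_t pfn)
{
	unsigned long hva, page_size;

	if (is_error_noslot_pfn(pfn) || kvm_is_reserved_pfn(pfn))
		return PT_PAGE_TABLE_LEVEL;

	/* Non-DAX PFNs can keep using the existing vma-based path. */
	if (!is_zone_device_page(pfn_to_page(pfn)))
		return host_mapping_level(vcpu->kvm, gfn);

	hva = gfn_to_hva(vcpu->kvm, gfn);
	if (kvm_is_error_hva(hva))
		return PT_PAGE_TABLE_LEVEL;

	/* Walk the host page table to find the size actually mapped. */
	page_size = 1UL << dev_pagemap_mapping_shift(hva, current->mm);

	if (page_size >= PUD_SIZE)
		return PT_PDPE_LEVEL;		/* 1GB */
	if (page_size >= PMD_SIZE)
		return PT_DIRECTORY_LEVEL;	/* 2MB */
	return PT_PAGE_TABLE_LEVEL;
}

Something along these lines would also let a DAX-backed page get a 1GB mapping when the host maps it with a PUD, instead of being limited to the 2MB case that transparent_hugepage_adjust() handles.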
-Liran

> ---
>  arch/x86/kvm/mmu/mmu.c | 31 +++++++++++++++++++++++++++----
>  1 file changed, 27 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 7269130ea5e2..ea8f6951398b 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3328,6 +3328,30 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
>  	__direct_pte_prefetch(vcpu, sp, sptep);
>  }
>  
> +static bool pfn_is_huge_mapped(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn)
> +{
> +	struct page *page = pfn_to_page(pfn);
> +	unsigned long hva;
> +
> +	if (!is_zone_device_page(page))
> +		return PageTransCompoundMap(page);
> +
> +	/*
> +	 * DAX pages do not use compound pages.  The page should have already
> +	 * been mapped into the host-side page table during try_async_pf(), so
> +	 * we can check the page tables directly.
> +	 */
> +	hva = gfn_to_hva(kvm, gfn);
> +	if (kvm_is_error_hva(hva))
> +		return false;
> +
> +	/*
> +	 * Our caller grabbed the KVM mmu_lock with a successful
> +	 * mmu_notifier_retry, so we're safe to walk the page table.
> +	 */
> +	return dev_pagemap_mapping_shift(hva, current->mm) > PAGE_SHIFT;
> +}
> +
>  static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
>  					gfn_t gfn, kvm_pfn_t *pfnp,
>  					int *levelp)
> @@ -3342,8 +3366,8 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
>  	 * here.
>  	 */
>  	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
> -	    !kvm_is_zone_device_pfn(pfn) && level == PT_PAGE_TABLE_LEVEL &&
> -	    PageTransCompoundMap(pfn_to_page(pfn))) {
> +	    level == PT_PAGE_TABLE_LEVEL &&
> +	    pfn_is_huge_mapped(vcpu->kvm, gfn, pfn)) {
>  		unsigned long mask;
>  
>  		/*
> @@ -5957,8 +5981,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>  	 * mapping if the indirect sp has level = 1.
>  	 */
>  	if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
> -	    !kvm_is_zone_device_pfn(pfn) &&
> -	    PageTransCompoundMap(pfn_to_page(pfn))) {
> +	    pfn_is_huge_mapped(kvm, sp->gfn, pfn)) {
>  		pte_list_remove(rmap_head, sptep);
>  
>  		if (kvm_available_flush_tlb_with_range())
> -- 
> 2.24.0.525.g8f36a354ae-goog
> 