The patch titled
     Subject: powerpc/mm/iommu: allow large IOMMU page size only for hugetlb backing
has been added to the -mm tree.  Its filename is
     powerpc-mm-iommu-allow-large-iommu-page-size-only-for-hugetlb-backing.patch

This patch should soon appear at
     http://ozlabs.org/~akpm/mmots/broken-out/powerpc-mm-iommu-allow-large-iommu-page-size-only-for-hugetlb-backing.patch
and later at
     http://ozlabs.org/~akpm/mmotm/broken-out/powerpc-mm-iommu-allow-large-iommu-page-size-only-for-hugetlb-backing.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx>
Subject: powerpc/mm/iommu: allow large IOMMU page size only for hugetlb backing

THP pages can get split along different code paths.  An elevated
reference count does guarantee that the compound page will not be split,
but the pmd entry mapping it can still be converted to level 4 pte
entries.  Keep the code simpler by allowing a large IOMMU page size only
if the guest RAM is backed by hugetlb pages.

Link: http://lkml.kernel.org/r/20190114095438.32470-6-aneesh.kumar@xxxxxxxxxxxxx
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
Cc: Alexey Kardashevskiy <aik@xxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: David Gibson <david@xxxxxxxxxxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/powerpc/mm/mmu_context_iommu.c |   22 ++++++----------------
 1 file changed, 6 insertions(+), 16 deletions(-)

--- a/arch/powerpc/mm/mmu_context_iommu.c~powerpc-mm-iommu-allow-large-iommu-page-size-only-for-hugetlb-backing
+++ a/arch/powerpc/mm/mmu_context_iommu.c
@@ -98,8 +98,6 @@ static long mm_iommu_do_alloc(struct mm_
 	struct mm_iommu_table_group_mem_t *mem;
 	long i, ret = 0, locked_entries = 0;
 	unsigned int pageshift;
-	unsigned long flags;
-	unsigned long cur_ua;
 
 	mutex_lock(&mem_list_mutex);
 
@@ -169,22 +167,14 @@ static long mm_iommu_do_alloc(struct mm_
 	for (i = 0; i < entries; ++i) {
 		struct page *page = mem->hpages[i];
 
-		cur_ua = ua + (i << PAGE_SHIFT);
-		if (mem->pageshift > PAGE_SHIFT && PageCompound(page)) {
-			pte_t *pte;
+		/*
+		 * Allow to use larger than 64k IOMMU pages. Only do that
+		 * if we are backed by hugetlb.
+		 */
+		if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page)) {
 			struct page *head = compound_head(page);
-			unsigned int compshift = compound_order(head);
-			unsigned int pteshift;
-
-			local_irq_save(flags); /* disables as well */
-			pte = find_linux_pte(mm->pgd, cur_ua, NULL, &pteshift);
-			/* Double check it is still the same pinned page */
-			if (pte && pte_page(*pte) == head &&
-			    pteshift == compshift + PAGE_SHIFT)
-				pageshift = max_t(unsigned int, pteshift,
-						PAGE_SHIFT);
-			local_irq_restore(flags);
+			pageshift = compound_order(head) + PAGE_SHIFT;
 		}
 		mem->pageshift = min(mem->pageshift, pageshift);
 		/*
_

Patches currently in -mm which might be from aneesh.kumar@xxxxxxxxxxxxx are

mm-update-ptep_modify_prot_start-commit-to-take-vm_area_struct-as-arg.patch
mm-update-ptep_modify_prot_commit-to-take-old-pte-value-as-arg.patch
arch-powerpc-mm-nest-mmu-workaround-for-mprotect-rw-upgrade.patch
mm-hugetlb-add-prot_modify_start-commit-sequence-for-hugetlb-update.patch
arch-powerpc-mm-hugetlb-nestmmu-workaround-for-hugetlb-mprotect-rw-upgrade.patch
mm-cma-add-pf-flag-to-force-non-cma-alloc.patch
mm-update-get_user_pages_longterm-to-migrate-pages-allocated-from-cma-region.patch
powerpc-mm-iommu-allow-migration-of-cma-allocated-pages-during-mm_iommu_do_alloc.patch
powerpc-mm-iommu-allow-large-iommu-page-size-only-for-hugetlb-backing.patch